In this section, we're going to talk about event monitoring and threat hunting. Before we get into threat hunting specifically, one concept we have to understand is "know normal." You'll hear it referred to in certain circles as "know normal, find evil." The idea is that you have to understand what normal network traffic, host activity, and events look like for your organization. Once you have a baseline of how much traffic your organization generates on a regular day, how much bandwidth a single host typically consumes, or how many failed authentication attempts occur on a typical day, then you can start to detect anomalies.

Beyond baselining a host or a network, you also need a baseline of what operating systems and applications normally do. What processes run on Windows 10 versus Windows 7? What are the file paths for those running processes? Again, you have to know what that baseline looks like so you can detect when something doesn't match what it should be. Threat hunters need to sift through anomalous activity and recognize the actual threats, so understanding the organization's operational activities is crucial. To accomplish this, a hunting team collaborates with key personnel inside and outside of IT to gather the information and insight needed to decide what is a threat and what is unusual but normal activity. This is difficult: doing it effectively requires a lot of fundamentals to be in place within your organization that may be outside the scope of the SOC, or of your security organization as a whole. Your organization has to know how many assets it has, and of what type. It has to be able to monitor network flow and bandwidth.
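As a rough illustration of the baselining idea, here is a minimal sketch in Python. The daily failed-login counts and the three-standard-deviation threshold are assumptions for the example, not values from the lecture:

```python
import statistics

def build_baseline(daily_counts):
    """Compute a simple baseline (mean and sample standard deviation)
    from historical daily failed-login counts."""
    return statistics.mean(daily_counts), statistics.stdev(daily_counts)

def is_anomalous(count, mean, stdev, threshold=3.0):
    """Flag a day whose count deviates more than `threshold`
    standard deviations from the baseline mean."""
    return abs(count - mean) > threshold * stdev

# Hypothetical history: failed logins per day over two weeks.
history = [12, 9, 14, 11, 10, 13, 12, 8, 11, 15, 10, 12, 9, 13]
mean, stdev = build_baseline(history)

print(is_anomalous(11, mean, stdev))   # a typical day
print(is_anomalous(250, mean, stdev))  # a spike worth investigating
```

Real baselines are built from far richer telemetry (bandwidth per host, process paths, and so on), but the principle is the same: no baseline, no anomaly.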
They're going to have to collect logs. You'll see that a lot of organizations have problems onboarding systems so that they accurately capture events and logs from individual hosts. You'll have to have all of those bases covered before you can effectively start to do threat hunting and know what's normal for your network. SANS is another educational provider, and they make a lot of handy posters. The one here is specifically for "know normal, find evil," and it goes over Windows processes: what is normal within Windows and what's not.

Something we use in threat hunting is the concept of the OODA loop, which stands for Observe, Orient, Decide, and Act. It was actually adopted from the US military. An Air Force colonel by the name of John Boyd came up with the concept: you observe your situation, you orient by analyzing the potential predicament given the environment, you decide based on the information you have, and you act on that decision, and whoever has the quicker OODA loop wins the engagement. What John Boyd originally designed this for was fighter pilots, flying jets and getting into air-to-air fights with other jets. His theory was that whoever's OODA loop is faster wins the fight. They still teach this to pilots, and it has since been adopted by the cybersecurity community and other security communities; you can see the breakdown there. That graph is actually John Boyd's OODA loop process. It is a little complex, and again, it was designed for fighter pilots, but the same idea applies to cybersecurity: you observe by collecting logs from IT and security systems; you orient by cross-checking the data against existing information to give it environmental context (is it normal or not?); and you decide on a course of action according to the incident status (is it a threat?).
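The observe-orient-decide-act cycle can be sketched as a loop. Everything here is a hypothetical placeholder: the event strings, the known-normal baseline set, and the crude "admin" decision rule stand in for real log pipelines, context lookups, and analyst judgment:

```python
def ooda_cycle(events, baseline, respond):
    """One illustrative pass of the OODA loop over a batch of log events."""
    # Observe: collect events from IT and security systems.
    observed = list(events)
    # Orient: cross-check each event against known-normal activity.
    anomalies = [e for e in observed if e not in baseline]
    # Decide: classify anomalies as threats (placeholder rule).
    threats = [e for e in anomalies if "admin" in e]
    # Act: execute the incident response plan for each confirmed threat.
    for threat in threats:
        respond(threat)
    return threats

actions = []
found = ooda_cycle(
    ["login ok", "admin login from unknown host"],  # hypothetical events
    baseline={"login ok"},
    respond=actions.append,
)
print(found)
```

The point of the sketch is the cycle itself: the output of one pass (what you acted on, and what turned out to be benign) feeds back into the baseline for the next pass.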
When we decide that it was a threat, we come up with a course of action and then act: we execute the incident response plan and take measures to prevent similar attacks in the future. This is a cyclical activity. We continually observe our logs, we analyze them for abnormal, essentially evil, activity, we decide on a course of action, we act on that course of action, and then we repeat.

Next, we're going to talk about indicators of compromise. Indicators of compromise (IOCs) are pieces of forensic data, such as system log entries or files, that identify potentially malicious activity on a system or network. Again, we have to know our normal; we have to have that baseline, and once we have an artifact that is a deviation from that baseline, it can be used as a piece of forensic evidence, something that would trigger and say, "hey, this might be weird," and give us an indicator of compromise. By monitoring for indicators of compromise, organizations can detect attacks and act quickly to prevent breaches from occurring, or limit damage by stopping attacks in their earlier stages. This is where OODA comes in: we detect and observe the activity, we contextualize it environmentally (that's our orient), we decide what our plan of action is, and then we act on it. The better we do that, the more we can limit damage by catching attacks in their earlier stages.

The bullets here are some common unusual behaviors you'll see that could be potential indicators of compromise. Unusual outbound activity: do you normally have no outbound traffic or outbound email after a certain time of day, but all of a sudden large amounts of traffic start leaving your network outside business hours? That would be unusual, and something you can investigate. Anomalies in privileged user account activity: do you see an unusually high number of local admin logins and attempts to install software or move around the network?
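Checks like the unusual-outbound-activity one can be automated once you know your business hours. A minimal sketch, assuming a hypothetical flow record of (ISO timestamp, direction, bytes) and business hours of 08:00-18:00:

```python
from datetime import datetime

BUSINESS_START, BUSINESS_END = 8, 18  # assumed business hours, 08:00-18:00

def after_hours_outbound(flows):
    """Return outbound flows whose timestamp falls outside business hours.
    Each flow is (iso_timestamp, direction, n_bytes) -- a made-up schema."""
    flagged = []
    for ts, direction, n_bytes in flows:
        hour = datetime.fromisoformat(ts).hour
        if direction == "outbound" and not (BUSINESS_START <= hour < BUSINESS_END):
            flagged.append((ts, n_bytes))
    return flagged

flows = [
    ("2023-05-01T10:15:00", "outbound", 4_096),          # normal daytime traffic
    ("2023-05-01T23:40:00", "outbound", 750_000_000),    # large transfer at night
    ("2023-05-01T23:45:00", "inbound", 1_024),
]
print(after_hours_outbound(flows))
```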
There will also be geographical irregularities: are you getting a lot of traffic from other nations that should potentially alert you? The list goes on; this is just a small sample. There is any number of indicators of compromise, and these are just some general ideas of what you might see.

One of the ways network defenders try to communicate [inaudible] between organizations is by using two standards called STIX and TAXII. STIX, Structured Threat Information Expression, is a language and serialization format used to exchange cyber threat intelligence (CTI). STIX enables organizations to share CTI with one another in a consistent, machine-readable manner, allowing security communities to better understand what attacks they are most likely to see and to anticipate and/or respond to those attacks faster and more effectively. The idea is that this is a common lexicon: when we say "vulnerability," we all mean the same thing; when we say "hash," we all mean the same thing when we write it up in our JSON or XML. Whatever the case, it's written in the same format so it can be easily shared between applications and organizations and easily ingested by them. Remember the OODA loop: we want to exchange information as effectively as possible so we can quickly ingest it, analyze it, and react to it. Then TAXII, the Trusted Automated Exchange of Intelligence Information, is an application-layer protocol for the communication of cyber threat information. TAXII is used to exchange cyber threat intelligence (CTI again) over HTTPS. We'll talk about this on the next slide. Basically, this is the idea behind threat intelligence platforms: if an attack happens to one organization, and through forensic analysis we identify what the attack was, how it happened, and what the targets were, we come up with our indicators of compromise. We can then write those up in STIX format and share them via TAXII.
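To make the "common format" idea concrete, here is a minimal sketch of a STIX 2.1 Indicator object for a known-bad file hash, built as a Python dict and serialized to JSON. The id, timestamps, and hash value are invented for illustration:

```python
import json

# A minimal STIX 2.1 Indicator for a file-hash IOC (values are made up).
indicator = {
    "type": "indicator",
    "spec_version": "2.1",
    "id": "indicator--d81f86b9-975b-4c0b-875e-810c5ad45a4f",
    "created": "2023-05-01T00:00:00.000Z",
    "modified": "2023-05-01T00:00:00.000Z",
    "name": "Known-bad file hash",
    # The pattern says: match any file whose SHA-256 equals this value.
    "pattern": "[file:hashes.'SHA-256' = "
               "'ef537f25c895bfa782526529a9b63d97aa631564d5d789c2b765448c8635fb6c']",
    "pattern_type": "stix",
    "valid_from": "2023-05-01T00:00:00Z",
}

print(json.dumps(indicator, indent=2))
```

Because every producer and consumer agrees on these field names and the pattern grammar, any tool that speaks STIX can ingest this object without custom parsing.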
Then other organizations can ingest that. That matters a lot if you're in certain industries of interest: the financial industry may have certain indicators of compromise that really won't apply outside of finance, things specific to banks or other financial institutions. Military organizations are going to have military-specific things that matter to them. There are general things that matter to everybody, but when we talk about certain types of threat actors, APTs and things of that nature, it's good to be able to say: this APT has a known history of targeting financial institutions, this financial institution was just under attack, and these are the IOCs from that attack. They share it, and then the other financial institutions can immediately ingest it and search their networks for it.

Down below we can see a diagram showing what STIX looks like: indicators of compromise, campaigns attributed to a threat actor, vulnerabilities tied to a campaign, and all of this gets bundled and shared with different organizations, usually through a threat intelligence platform using TAXII and STIX. More on that: IOCs and TIPs. Threat Intelligence Platforms ingest community- and vendor-provided IOCs to help you decide the impact on your organization and vertical. By vertical we mean your industry. Sometimes an attack will happen, say in the financial sector, targeting something specific to that sector, maybe ATMs. If you work in manufacturing and have nothing to do with ATMs, that helps you immediately gauge the potential threat impact: maybe it's not that big an issue for you. Maybe some of the vulnerabilities they took advantage of could still impact you, but the immediate campaign is potentially something you don't have to worry about. This is an example of a TIP write-up for the Poison Ivy malware.
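Ingesting shared IOCs like this happens over TAXII, which runs over HTTPS. As a sketch, a TAXII 2.1 client fetches STIX objects from a collection endpoint with a TAXII media type in the Accept header; the server URL and collection id below are hypothetical:

```python
import json
import urllib.request

TAXII_ACCEPT = "application/taxii+json;version=2.1"

def build_objects_request(api_root, collection_id):
    """Build an HTTPS request for a TAXII 2.1 collection's objects endpoint."""
    url = f"{api_root}/collections/{collection_id}/objects/"
    return urllib.request.Request(url, headers={"Accept": TAXII_ACCEPT})

req = build_objects_request(
    "https://taxii.example.com/api1",         # hypothetical API root
    "91a7b528-80eb-42ed-a74d-c6fbd5a26116",   # hypothetical collection id
)
print(req.full_url)

# Sending the request requires a real (and usually authenticated) server:
# with urllib.request.urlopen(req) as resp:
#     envelope = json.load(resp)  # {"objects": [ ...STIX objects... ]}
```

In practice a threat intelligence platform handles this polling for you and drops the resulting STIX objects into your detection tooling.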
On the right there, we can see that Poison Ivy is a remote access Trojan. The idea is that once an attacker gets Poison Ivy onto a target system, they can then remote into that system and have remote control of and access to it. On the left we have the indicators for Poison Ivy: if you have a file whose SHA-256 hash matches the hash you see there on the screen, that's a known hash for Poison Ivy. It gives you the pattern type, and it gives you the time this IOC was submitted; you can see this one's quite old. Now, what do we know about hashes? The problem you have with signature-based detection is that changing a single byte in the Poison Ivy malware will change that hash. Now [inaudible] is obfuscating library calls; that changes the hash, and the IOC no longer matches exactly. That's the game of cat and mouse we play as defenders, but this is an example of STIX.

Up next, we're going to jump back into our Security Onion virtual machine. We're going to look at Suricata, look at some of the rule sets, how to write a rule, and how to identify potentially bad traffic, and we'll take a general look at the Security Onion console.
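Before moving on, the hash-fragility point is easy to demonstrate: flipping a single byte of input produces a completely different SHA-256 digest, so a hash-based IOC no longer matches. The sample bytes here are made up; they are not real malware:

```python
import hashlib

original = b"malware-sample-bytes"
modified = b"malware-sample-bytez"  # the same payload with one byte changed

h1 = hashlib.sha256(original).hexdigest()
h2 = hashlib.sha256(modified).hexdigest()

print(h1 == h2)  # False: the hash IOC no longer matches
```

This is why hash IOCs are the most brittle kind of indicator: useful for exact-match detection, trivially evaded by any repacking or obfuscation of the sample.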