<h1>The art and science behind Microsoft threat hunting: Part 2</h1>

<p><em>Microsoft Security Blog | September 21, 2022</em></p>
<p>We discussed the Microsoft Detection and Response Team's (DART) threat hunting principles in part 1 of the "The art and science behind Microsoft threat hunting" blog series. In this follow-up post, we will talk about some general hunting strategies, frameworks, and tools, and about how Microsoft incident responders work with threat intelligence.</p>

<h2>General hunting strategies</h2>

<p>In DART, we follow a set of threat hunting strategies when our analysts start their investigations. These strategies serve as catalysts for our analysts to conduct deeper investigations. For the purposes of this blog, we are listing these strategies under the assumption that a compromise has been confirmed in the customer's environment.</p>

<h3>Starting with IOCs ("known bads")</h3>

<p>An incident response investigation is more manageable when you start with an initial indicator of compromise (IOC) trigger, or a "known bad," that leads you to additional findings. We typically begin with data reduction techniques to limit the data we're looking at. One example is data stacking, which helps us filter and sort forensic artifacts by indicator across the enterprise environment until we've determined which machines in that environment have been confirmed with the same IOC trigger. We then enter the hunting flow and repeat this process.</p>

<p><em>Figure 1: The hunting cycle starts with hunting for indicators or "known bads," ranging from the smallest unit of indicators to behavioral indicators that may define the actor.</em></p>

<p>Types of indicators can be classified into:</p>

<h3>Quick wins</h3>

<p>Unfortunately, not everything we start out with is interrelated with the trigger IOC. Another hunting strategy we employ is to look for quick wins; in other words, indicators of typical adversary behavior present in a customer environment. Some examples of quick wins include typical actor techniques, actor-specific TTPs, known threats, and verified IOCs.</p>
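To make the data-stacking idea concrete, here is a minimal sketch: group each collected forensic artifact by indicator value and count the distinct hosts it appears on, so a confirmed "known bad" surfaces together with its spread across the environment. The record layout and sample values are invented for illustration; this is not DART's actual tooling.

```python
from collections import defaultdict

# Hypothetical forensic records: (hostname, artifact value) pairs, such as a
# suspicious file name or hash collected from each machine in the environment.
artifacts = [
    ("HOST-01", "svchost_fake.exe"),
    ("HOST-02", "svchost_fake.exe"),
    ("HOST-02", "legit_tool.exe"),
    ("HOST-03", "svchost_fake.exe"),
]

def stack_by_indicator(records):
    """Group artifacts by indicator value and count distinct hosts per value."""
    hosts_per_indicator = defaultdict(set)
    for host, indicator in records:
        hosts_per_indicator[indicator].add(host)
    # Most widespread values first: these show how far a confirmed IOC has
    # spread; values seen on very few hosts become candidates for closer review.
    return sorted(
        ((ind, len(hosts)) for ind, hosts in hosts_per_indicator.items()),
        key=lambda pair: pair[1],
        reverse=True,
    )

for indicator, host_count in stack_by_indicator(artifacts):
    print(f"{indicator}: seen on {host_count} host(s)")
```

Running the stacked output against a confirmed IOC trigger is one way to "rinse and repeat" the hunting flow: each newly confirmed machine contributes more artifacts to stack.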
<p>Identifying our quick wins is the most impactful step for the customer, as it helps us formulate our attack narrative while guiding the customer to keep the actor out of the environment.</p>

<p><em>Figure 2: Hunting order of operations.</em></p>

<h3>Anomaly-based hunting</h3>

<p>If you're out of leads, another strategy is to pivot to hunting for anomalies, which draws on information derived from our "known bads" and quick wins. We discussed anomalies in the first part of this series as part of understanding the customer data. Some techniques:</p>

<p>Pure anomaly-based hunting may be performed concurrently with other hunting strategies on a customer engagement, depending on the data we're presented with. This method is incredibly nuanced and requires seasoned experts to verify whether data patterns encompass normal or "abnormal" behavior. This prevalence-checking and data science approach is the most time-consuming, but it can surface some of the most interesting evidence in an investigation. Case in point: we can detect new advanced persistent threat (APT) actor groups and campaigns with anomaly hunting, while they are rarely detected just by searching for the "known bads."</p>

<h3>Tying it all together: The attack narrative</h3>

<p>Stringing together our patterns of anomalous activity, factual data from quick wins, and analytical opinions must conclude with an attack narrative. In an incident response investigation, the MITRE ATT&CK framework serves as a foundation for adversary tactics and techniques based on real-world observations.</p>

<p>The MITRE framework helps us ensure that we're looking at our hypothesis in a structured manner, enabling us to tell the customer a cohesive narrative that is rooted in our analysis. We aim to answer questions such as:</p>

<p>Additionally, we want to answer questions surrounding threat actor intent to help tell a better story and build better defenses.</p>
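One way to picture this hypothesis-structuring step is to tag each finding with the ATT&CK technique and tactic it maps to during analysis, then order the findings by the matrix's tactic flow so they read as a narrative from entry point to objective. The technique IDs below are real ATT&CK identifiers, but the events, field names, and helper are invented for illustration; this is a sketch, not DART tooling.

```python
# Hypothetical engagement findings, each tagged with an ATT&CK technique ID
# and the tactic it was mapped to during analysis.
findings = [
    {"event": "LSASS memory dumped", "technique": "T1003.001", "tactic": "Credential Access"},
    {"event": "Spearphishing attachment opened", "technique": "T1566.001", "tactic": "Initial Access"},
    {"event": "Data archived for staging", "technique": "T1560", "tactic": "Collection"},
    {"event": "Scheduled task created", "technique": "T1053.005", "tactic": "Persistence"},
]

# Tactics ordered roughly as the ATT&CK matrix presents the attack lifecycle.
TACTIC_ORDER = [
    "Initial Access", "Execution", "Persistence", "Privilege Escalation",
    "Defense Evasion", "Credential Access", "Discovery", "Lateral Movement",
    "Collection", "Exfiltration", "Impact",
]

def build_narrative(findings):
    """Sort findings by tactic order and render one narrative line per finding."""
    ordered = sorted(findings, key=lambda f: TACTIC_ORDER.index(f["tactic"]))
    return [f"{f['tactic']} ({f['technique']}): {f['event']}" for f in ordered]

for line in build_narrative(findings):
    print(line)
```

Ordering the evidence this way makes gaps in the hypothesis visible: a missing tactic in the sequence is a prompt to hunt for the step the actor must have taken in between.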
<p>Some common attack patterns from the MITRE framework are listed in Table 1.</p>
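Returning to the anomaly-based hunting strategy above, the prevalence-checking idea can be sketched as a simple rarity filter over fleet telemetry: anything observed on only a tiny fraction of hosts is queued for expert review rather than auto-flagged as malicious. The data, process names, and threshold are invented for illustration.

```python
# Hypothetical process-execution telemetry: (host, process name) observations.
events = [
    ("HOST-01", "outlook.exe"), ("HOST-02", "outlook.exe"),
    ("HOST-03", "outlook.exe"), ("HOST-04", "outlook.exe"),
    ("HOST-04", "x9sv.exe"),  # present on a single machine: candidate anomaly
]

def rare_processes(events, total_hosts, max_prevalence=0.05):
    """Flag processes whose host prevalence falls at or below a threshold."""
    hosts_seen = {}
    for host, proc in events:
        hosts_seen.setdefault(proc, set()).add(host)
    return [proc for proc, hosts in hosts_seen.items()
            if len(hosts) / total_hosts <= max_prevalence]

# In a hypothetical 20-host environment, x9sv.exe (5% prevalence) is queued
# for review, while outlook.exe (20%) is treated as common and skipped.
print(rare_processes(events, total_hosts=20))
```

Low prevalence alone is not a verdict; as noted above, a seasoned analyst still has to judge whether the rare pattern is a legitimate one-off or evidence of an actor.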