Detection and protection success stories | Microsoft Security Blog
http://approjects.co.za/?big=en-us/security/blog/topic/detection-and-protection-success-stories/

How Microsoft Defender protects high-value assets in real-world attack scenarios
http://approjects.co.za/?big=en-us/security/blog/2026/03/27/microsoft-defender-protects-high-value-assets/
Fri, 27 Mar 2026 19:53:53 +0000

The post How Microsoft Defender protects high-value assets in real-world attack scenarios appeared first on Microsoft Security Blog.


High-value assets including domain controllers, web servers, and identity infrastructure are frequent targets in sophisticated attacks. Microsoft Defender applies asset-aware protection using Microsoft Security Exposure Management to detect and block threats against these critical systems. This article explores real-world attack scenarios and defense techniques.


As cyberthreats continue to grow in scale, speed, and sophistication, organizations must pay close attention to the systems that form their backbone: High-Value Assets (HVAs). These assets include the servers, services, identities, and infrastructure essential for business operations and security. Examples include domain controllers that manage authentication and authorization across the network; web servers hosting business-critical applications such as Exchange or SharePoint; identity systems that enable secure access across on-premises and cloud environments; and other components such as certificate authorities and internet-facing services that provide access to corporate applications.

This reinforces a simple but important idea: not all assets carry the same risk, and protections should reflect their role and impact. To support this, we continue to expand differentiated protections for the assets that matter most. These efforts focus on helping organizations reduce risk, disrupt high-impact attack paths, and strengthen overall resilience. Microsoft Defender already provides enhanced protection for critical assets through capabilities such as automatic attack disruption. In this article, we explore how additional security layers further strengthen risk-based protection.

Using asset context to strengthen detection

In recent years, human-operated cyberattacks have evolved from sporadic, opportunistic intrusions into targeted campaigns designed to maximize impact. Analysis shows that in more than 78% of these attacks, threat actors successfully compromise a High-Value Asset, such as a domain controller, to gain deeper, elevated access within the organization.

Traditional endpoint detection methods rely on behavioral signals such as process execution, command-line activity, and file operations. While effective in many scenarios, these signals often lack context about the asset being targeted. Administrative tools, scripting frameworks, and system utilities can appear identical in both legitimate and malicious use.

This is where understanding a device’s role becomes essential. On high-value assets such as domain controllers or identity infrastructure, even small risks matter because the potential impact is significantly higher. Activities that may be routine on general-purpose servers or administrative workstations can indicate compromise when observed on Tier-0 systems.

Defender incorporates a critical asset framework to enrich detection with this context. This intelligence is powered by Microsoft Security Exposure Management, where critical assets, attack paths, and cross-workload relationships provide the context needed to distinguish normal administrative activity from high-risk behavior. This approach also enables automatic identification of critical assets in customer environments and applies deeper, context-aware detections based on each asset’s risk profile.

How high-value asset protection works

  1. Asset classification: Security Exposure Management asset intelligence builds a high‑confidence inventory and exposure graph of an organization’s assets across devices, identities, cloud resources, and external attack surfaces. By enriching asset data with contextual signals such as predefined classifications and criticality levels based on a system’s role and function, Security Exposure Management can automatically identify and tag High-Value Assets across on-premises, hybrid, and cloud environments, providing a consistent view of the systems that are most critical to the organization.
  2. Real-time differentiated intelligence from the cloud: HVA-aware anomaly detection extends cloud-delivered protection by continuously learning what normal looks like for critical assets and highlighting activity that meaningfully deviates from those baselines. Instead of applying one-size-fits-all thresholds, the system evaluates behavior in the context of the asset's role, sensitivity, and expected operational patterns.
  3. Endpoint-delivered protections: Targeted protections that prioritize high-impact TTPs on High-Value Assets. By incorporating device role context and critical asset intelligence from Security Exposure Management, behaviors that may appear as weak signals in isolation can be elevated to high-confidence prevention when observed on Tier-0 systems, enabling more decisive protection where the potential blast radius is greatest.
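The three layers above can be sketched as a simplified scoring model. The criticality tiers, weights, and threshold below are illustrative assumptions for explanation only, not Defender's actual logic:

```python
# Illustrative sketch of asset-aware detection scoring (hypothetical
# tiers, weights, and thresholds; not Defender's implementation).

CRITICALITY_WEIGHT = {
    "tier0": 3.0,        # domain controllers, identity infrastructure
    "server": 1.5,       # general-purpose servers
    "workstation": 1.0,  # standard endpoints
}

def risk_score(base_signal_score: float, asset_tier: str) -> float:
    """Scale a behavioral signal by the asset's criticality tier."""
    return base_signal_score * CRITICALITY_WEIGHT.get(asset_tier, 1.0)

def verdict(score: float, block_threshold: float = 2.0) -> str:
    return "block" if score >= block_threshold else "monitor"

# A weak signal (0.8) on a workstation is only monitored...
assert verdict(risk_score(0.8, "workstation")) == "monitor"
# ...but the same signal on a Tier-0 asset crosses the block threshold.
assert verdict(risk_score(0.8, "tier0")) == "block"
```

The point of the sketch is the asymmetry: the same behavioral signal produces different enforcement outcomes depending on the role of the asset it is observed on.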

Real-world high-value asset protection scenarios

Focused protection for domain controllers

Domain controllers are the backbone of on-premises environments, managing identity and access through Active Directory (AD). Because of their central role, threat actors frequently target domain controllers seeking elevated privileges. One common technique involves extracting credential data from NTDS.DIT, the Active Directory database that stores password hashes and account information for users across the domain, including highly privileged accounts such as domain administrators. On systems identified as domain controllers, Defender can apply stronger prevention powered by critical assets and attack paths, combining multiple behavioral signals that would otherwise appear benign in isolation.

Figure-1. High‑value asset protection scenario demonstrating how Microsoft Defender detects and blocks domain controller credential theft using critical asset context.

In one observed incident, the activity began with the compromise of Machine 0, an internet-exposed server. The threat actor gained a foothold and established persistence to maintain access. This system served as the initial entry point into the environment, allowing the threat actor to begin reconnaissance and identify systems with broader access inside the network. The threat actor then moved laterally to Machine 1, a server with broader access within the network.

On this system, the actor established a reverse SSH tunnel to threat actor-controlled infrastructure while bypassing inbound firewall restrictions and setting up an NTLM relay trap. This positioned the machine to intercept or relay authentication attempts originating from other machines in the network. Subsequently, authentication activity originating from Machine 2, a high-value system with Domain Admin privileges, interacted with the relay setup. By leveraging the captured NTLM authentication exchange, the actor was able to authenticate with elevated privileges within the domain.

Using the leaked Domain Admin access, the threat actor then authenticated to Machine 3, a domain controller. With privileged access to the DC, the actor attempted to extract Active Directory credential data by using ntdsutil.exe to dump the NTDS.DIT database. Protections designed specifically for high‑value assets prevented the command‑line attempt, stopping execution before the database could be accessed. The activity also triggered automated disruption, resulting in the Domain Admin account being disabled, effectively stopping the threat actor from proceeding further with credential extraction and limiting the potential impact to the domain.

In this attack, the adversary remotely created a scheduled task on a domain controller that executed ntdsutil.exe to generate a backup containing the Active Directory database. The task was configured to run as SYSTEM and then deleted shortly afterward to reduce forensic visibility.

Individually, both behaviors (remote scheduled task creation and execution of ntdsutil.exe) can occur in administrative scenarios across enterprise environments. However, analysis of historical activity within the environment shows that the combination is an outlier, making it a high-confidence indicator of credential theft preparation on a domain controller. By incorporating asset role, attack path context, historical correlations, and the blast radius of the activity, Defender can deterministically block credential theft preparation on domain controllers.
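A minimal correlation rule of this kind can be sketched as follows; the event names and role labels are hypothetical stand-ins for the richer signals the product actually correlates:

```python
# Hypothetical correlation rule: behaviors that are individually benign
# become a high-confidence block when combined on a domain controller.

def evaluate_dc_activity(events: set[str], device_role: str) -> str:
    credential_theft_prep = {"remote_scheduled_task_created", "ntdsutil_executed"}
    if device_role == "domain_controller" and credential_theft_prep <= events:
        return "block"  # combined outlier on a Tier-0 asset
    return "allow"      # either behavior alone may be legitimate admin work

# ntdsutil alone on a DC could be a legitimate backup operation.
assert evaluate_dc_activity({"ntdsutil_executed"}, "domain_controller") == "allow"
# The combination on a DC is treated as credential theft preparation.
assert evaluate_dc_activity(
    {"remote_scheduled_task_created", "ntdsutil_executed"}, "domain_controller"
) == "block"
# The same combination on a non-critical server is not auto-blocked here.
assert evaluate_dc_activity(
    {"remote_scheduled_task_created", "ntdsutil_executed"}, "file_server"
) == "allow"
```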

Early detection of webshells and IIS compromise

When Defender identifies a high-value asset running the IIS role, it applies targeted inspection to locations that are commonly exposed and frequently abused during server compromise. This includes focused scanning of web-accessible directories and application paths for suspicious or unauthorized script files. In several investigations involving SharePoint and Exchange servers, this approach surfaced previously unknown and highly targeted webshells with poor detection coverage.

In many cases, the malicious logic was inserted directly into legitimate web application files, allowing threat actors to blend into normal application behavior and maintain stealthy access to the server.

Protection technologies such as AMSI for Exchange and SharePoint help block malicious code and incoming exploitation attempts. However, if a threat actor already has elevated access inside the organization, they can target these internet-facing High-Value Assets directly. In one such scenario, the threat actor had already gained elevated privileges inside the organization. From another compromised system, the actor remotely dropped a highly customized, previously unseen webshell into the EWS directory of an Exchange server.

The webshell had file upload, file download, and in-memory code execution capabilities. Because the device was identified as an Exchange server hosting internet-facing content, the risk profile was significantly higher. Leveraging this role context, Defender immediately remediated the file upon creation, preventing the threat actor from establishing control over the Exchange workload.

Figure-2. High‑value asset protection diagram showing a threat actor remotely dropping a webshell onto an internet‑facing Exchange server, with Microsoft Defender detecting and immediately remediating the malicious file based on server role and critical asset context.

Expanded protection from remote credential dumping

High‑Value Assets (HVAs) hold the most sensitive credentials in an organization, making them a primary target for adversaries once initial access is achieved. Threat actors often attempt to access credential stores remotely using administrative protocols, directory replication methods, or interactions with identity synchronization systems such as Microsoft Entra Connect.

These activities can involve the movement or staging of sensitive artifacts, including Active Directory database files, registry hives, or identity synchronization data. Suspicious patterns such as creation of credential-related files in non-standard locations or unexpected transfers between systems may indicate attempts to compromise credentials. Incorporating device role context enables stronger protections on the systems where credential exposure poses the highest risk, such as domain controllers and identity infrastructure servers. By considering the process chains and access patterns involved, Defender can more effectively prevent exfiltration of sensitive credential data.
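A simple version of the "credential artifact in a non-standard location" heuristic can be sketched like this; the artifact names and expected paths are illustrative assumptions, not an exhaustive or authoritative list:

```python
# Hypothetical heuristic: credential-store artifacts appearing outside
# their expected locations (illustrative paths; not a complete list).

EXPECTED_LOCATIONS = {
    "ntds.dit": [r"c:\windows\ntds"],
    "system":   [r"c:\windows\system32\config"],
    "sam":      [r"c:\windows\system32\config"],
}

def is_suspicious_staging(file_name: str, directory: str) -> bool:
    """True if a known credential artifact shows up in an unexpected path."""
    expected = EXPECTED_LOCATIONS.get(file_name.lower())
    if expected is None:
        return False  # not a known credential artifact
    return directory.lower() not in expected

# The database in its normal home is not flagged...
assert not is_suspicious_staging("ntds.dit", r"C:\Windows\NTDS")
# ...but a copy staged in a user-writable path is.
assert is_suspicious_staging("ntds.dit", r"C:\Users\Public\Temp")
```

Combined with the device role context described above, a hit on a domain controller or identity server can be escalated far more aggressively than the same file movement on an ordinary endpoint.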

Protecting your HVAs

While Microsoft Security Exposure Management continues to improve automatic identification and classification of high‑value assets (HVAs) in customer environments, customers can take several concrete steps today to strengthen protection outcomes.

1. Ensure coverage across all critical assets

Review environments to confirm that all truly high‑value assets are identified, including assets that may not be obvious by type alone (for example, servers running privileged services or machines holding sensitive credentials). Gaps in classification can lead to gaps in protection prioritization.

2. Prioritize security posture improvements and alert response for HVAs

Customers should focus first on implementing security posture recommendations that apply to high-value assets, as these systems represent the greatest potential impact if compromised. Addressing gaps on HVAs delivers disproportionately higher risk reduction compared to non-critical assets.

In addition, organizations should prioritize monitoring and rapid response for alerts originating from HVAs. Accelerating investigation and remediation for these alerts helps mitigate threats in a timely manner and significantly limits potential blast radius.

3. Triage vulnerabilities with HVA context

When reviewing vulnerabilities, prioritize remediation on HVAs before lower‑impact assets. A moderate vulnerability on a high‑value asset might present greater risk than a high‑severity issue on a non‑critical endpoint.
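One way to operationalize this is to weight severity by asset criticality when ranking the remediation queue. The weight value below is an arbitrary assumption for illustration; any scheme that lets HVA context outrank raw CVSS achieves the same effect:

```python
# Illustrative triage: severity weighted by asset criticality, so a
# moderate CVSS on an HVA can outrank a high CVSS on a non-critical
# endpoint. The 3.0 weight is a hypothetical choice.

def triage(vulns: list[dict]) -> list[dict]:
    def priority(v: dict) -> float:
        weight = 3.0 if v["is_hva"] else 1.0
        return v["cvss"] * weight
    return sorted(vulns, key=priority, reverse=True)

vulns = [
    {"id": "CVE-A", "cvss": 8.1, "is_hva": False},  # weighted 8.1
    {"id": "CVE-B", "cvss": 5.5, "is_hva": True},   # weighted 16.5
]
# The moderate finding on the HVA is remediated first.
assert [v["id"] for v in triage(vulns)] == ["CVE-B", "CVE-A"]
```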

Learn more

Explore these resources to stay current on the latest research and product updates:

To get notified about new publications and to join discussions on social media, follow us on LinkedIn, X (formerly Twitter), and Bluesky.

To hear stories and insights from the Microsoft Threat Intelligence community about the ever-evolving threat landscape, listen to the Microsoft Threat Intelligence podcast.

Case study: How predictive shielding in Defender stopped GPO-based ransomware before it started
http://approjects.co.za/?big=en-us/security/blog/2026/03/23/case-study-predictive-shielding-defender-stopped-gpo-based-ransomware-before-started/
Mon, 23 Mar 2026 16:00:00 +0000

Microsoft Defender stopped a human-operated ransomware attack that abused Group Policy Objects (GPOs) to disable defenses and push encryption at scale. This case study breaks down the attacker’s playbook and shows how predictive shielding hardened 700 devices in time, resulting in zero GPO-based encryptions and blocking most of the attempted impact.

The post Case study: How predictive shielding in Defender stopped GPO-based ransomware before it started appeared first on Microsoft Security Blog.


Summary

  • Microsoft Defender disrupted a human-operated ransomware incident targeting a large educational institution with more than two thousand devices.
  • The attacker attempted to weaponize Group Policy Objects (GPOs) to tamper with security controls and distribute ransomware via scheduled tasks.
  • Defender’s predictive shielding detected the attack before ransomware was deployed and proactively hardened against malicious GPO propagation across 700 devices.
  • Defender blocked ~97% of the attacker’s attempted encryption activity in total, and zero machines were encrypted via the GPO path.

The growing threat: GPO abuse in ransomware operations

Modern ransomware operators have evolved well beyond simple payload delivery. Today’s attackers understand enterprise infrastructure intimately. They actively exploit the administrative mechanisms that organizations depend on to both neutralize security products and distribute ransomware at scale.

Group Policy Objects (GPOs) have become a favored tool for exactly this purpose. GPOs are a built-in, trusted mechanism for pushing configuration changes across domain-joined devices. Attackers have learned to abuse them: pushing tampering configurations to disable security tools, deploying scheduled tasks that distribute and execute ransomware, and achieving wide organizational impact without needing to touch each machine individually.

In this blog, we examine a real incident where an attacker weaponized GPOs in exactly this way, and how Defender’s predictive shielding responded by catching the attack before the ransomware was even deployed.

The incident

The target was a large educational institution with more than two thousand devices onboarded to Microsoft Defender and the full Defender suite deployed. The infrastructure included 33 servers, 11 domain controllers, and 2 Entra Connect servers.

Attack chain overview

The attacker’s progression through the environment was methodical:

Initial Access and Privilege Escalation: The attacker began operating from an unmanaged device. At this stage, one Domain Admin account had already been compromised. Due to limited visibility, the initial access vector and the method used to obtain Domain Admin privileges remain unknown.

Day 1: Reconnaissance: The attacker began reconnaissance activity using AD Explorer for Active Directory enumeration and brute force techniques to map the environment. Defender generated alerts in response to these activities.

Day 2: Credential Access and Lateral Movement: The attacker obtained credentials for multiple high privilege accounts, with Kerberoasting and NTDS dump activity observed leading up to this point. During this phase, the attacker also created multiple local accounts on compromised systems to establish additional persistent access. Using some of the acquired credentials, the attacker then began moving laterally within the network.

During these activities, Defender initiated attack disruption against five compromised accounts. This action caused the attacker’s lateral movement attempts to be blocked at scale, resulting in thousands of blocked authentication and access attempts and a significant slowdown of the attack.

With attack disruption in place, the attacker’s progress was significantly constrained at this stage, limiting lateral movement and preventing rapid escalation. Without this intervention, the customer would have faced a far more severe outcome.

Day 5: Defense Evasion and Impact: While some accounts were disrupted and blocked, the attacker was still able to leverage additional privileged accounts under their control. Using these accounts, the attacker transitioned to the impact phase and leveraged Group Policy as the primary distribution mechanism. Just prior to the ransomware deployment, the attacker used a GPO to propagate a tampering policy that disabled Defender protections.

The ransomware payload was then distributed via GPO, while in parallel, the attacker executed additional remote ransomware operations, delivering the payload over SMB using multiple compromised accounts.

A second round of attack disruption was initiated by Defender as a reaction to this new stage, this time alongside predictive shielding. More than a dozen compromised entities were disrupted, together with GPO hardening, ultimately neutralizing the attack and preventing the attacker from making any further progress.

Deep dive: How the attacker weaponized group policy and how predictive shielding stopped the attack

Step 1: Tampering with security controls

The attacker’s first move was to create a malicious GPO designed to tamper with endpoint security controls. The policy disabled key Defender protections, including behavioral monitoring and real-time protection, with the goal of weakening defenses ahead of ransomware deployment.

This tampering attempt triggered a Defender tampering alert. In response, predictive shielding activated GPO hardening, temporarily pausing the propagation of new GPO policies across all Microsoft Defender for Endpoint-onboarded devices reachable from the attacker's position, protecting roughly 85% of devices against the tampering policy.

Step 2: Ransomware distribution via scheduled tasks

Approximately ten minutes after creating the tampering GPO, unaware that Defender's GPO hardening policy had already been deployed and activated, the attacker attempted to proceed with the next stage of the attack: ransomware payload distribution.

  • The attacker placed three malicious files: run.bat, run.exe and run.dll under the SYSVOL share. These files were responsible for deploying the ransomware payload.
  • A second malicious GPO was created to configure a scheduled task on targeted devices.
  • The scheduled task copied the payload files locally and executed them using the following chain:
     cmd /c start run.bat → cmd /c c:\users\…\run.exe → rundll32 c:\users\…\run.dll Encryptor

This approach is effective because each device pulls the payload to itself through the scheduled task. The attacker sets the GPO once, and the devices do the rest. It’s a self-service distribution model that leverages the infrastructure the organization depends on.

Because GPO hardening had already been applied during the tampering stage, by the time the attacker created the ransomware GPO ten minutes later, the environment was already hardened. The system recognized that GPO tampering is a precursor to ransomware distribution and acted preemptively. The system didn’t wait for ransomware to appear. It acted on what the attacker was about to do.
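The precursor logic can be sketched as a small state machine. The event names and actions below are illustrative labels, not Defender's internal events:

```python
# Simplified sketch of precursor-based hardening: a tampering GPO is
# treated as a leading indicator, so GPO propagation is paused before
# the ransomware-distribution GPO ever applies. Names are hypothetical.

class GpoGuard:
    def __init__(self) -> None:
        self.hardened = False

    def on_event(self, event: str) -> str:
        if event == "gpo_disables_security_tools":
            self.hardened = True  # act on the precursor, not the payload
            return "pause_gpo_propagation"
        if event == "gpo_deploys_scheduled_task_payload":
            return "blocked" if self.hardened else "applied"
        return "ignored"

guard = GpoGuard()
# The tampering GPO triggers hardening immediately...
assert guard.on_event("gpo_disables_security_tools") == "pause_gpo_propagation"
# ...so the ransomware GPO created minutes later never propagates.
assert guard.on_event("gpo_deploys_scheduled_task_payload") == "blocked"
```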

The results

The numbers speak for themselves:

  • Zero machines were encrypted via the GPO path.
  • Roughly 97% of devices the attacker attempted to encrypt were fully protected by Defender. A limited number of devices experienced encryption during concurrent ransomware activity over SMB; however, attack disruption successfully contained the incident and stopped further impact.
  • 700 devices applied the predictive shielding GPO hardening policy, reflecting the attacker’s broad targeting scope, and blocking the propagation of the malicious policy set by the attacker within approximately 3 hours.

The hardening dilemma: Why threat actors love operational mechanisms

Enterprise environments rely on administrative mechanisms such as Group Policy, scheduled tasks, and remote management tools to manage and automate operations at scale. These capabilities are highly privileged and widely trusted, making them a natural part of everyday IT workflows. Because they are designed for legitimate administration and automation, attackers increasingly target them as a low-friction way to disable defenses and distribute malware using the same tools administrators use every day.

This creates a fundamental asymmetry. Defenders must keep these mechanisms open for legitimate use, while attackers exploit that very openness. Attackers increasingly pivot toward IT management mechanisms precisely because they can’t be hardened all the time. GPO changes are treated as legitimate administrative activity. Scheduled tasks are a normal OS function. SYSVOL and NETLOGON must remain accessible to every domain-joined device.

Traditional security approaches all fall short here. Always-on hardening breaks operations. Detection-only is too late, because by the time an alert fires, ransomware may already be distributed across the environment. Manual SOC intervention can’t keep pace with an attacker operating in minutes. This is the gap that predictive shielding is designed to close.

Predictive shielding: Contextual, just-in-time hardening

Predictive shielding is built on two pillars. The first is prediction: Defender correlates activity signals, threat intelligence, and exposure topology to infer what the attacker is likely to do next and which assets are realistically reachable. The second is enforcement: targeted, temporary controls are applied to disrupt the predicted attack path in real time.

This is a fundamentally different approach to protection: adaptive, risk-conditioned enforcement, with controls that are scoped to the blast radius, temporary, and contextual. Instead of relying on always-on controls or reacting after damage occurs, Defender applies these targeted, temporary protections only when concrete risk signals indicate an attack is unfolding.

Closing the gap

Operational mechanisms like GPO can’t be permanently hardened, and that is exactly why threat actors pivot toward them. Predictive shielding closes this gap with contextual, just-in-time hardening that acts on predicted attacker intent rather than waiting for the attack to materialize.

In this case, predictive shielding caught the attacker at the tampering stage and prevented ransomware from spreading through a malicious GPO: 700 devices were saved from encryption, achieving a roughly 97% protection rate. The remaining devices were encrypted through rapid remote SMB-based ransomware deployment, after which attack disruption successfully contained the incident and stopped further propagation. Zero machines applied the attacker’s malicious ransomware deployment GPO, preventing widespread encryption and saving the customer from significant recovery costs, operational downtime, and data loss.

MITRE ATT&CK® techniques observed

The table below maps observed behaviors to ATT&CK. (Tactics shown are per technique definition.)

Tactic(s) | Technique ID | Technique name | Observed details
Discovery | T1087.002 | Account Discovery: Domain Account | The attacker used AD Explorer to enumerate Active Directory objects and domain accounts during initial reconnaissance.
Credential Access | T1110 | Brute Force | During early reconnaissance, the attacker used brute force techniques.
Credential Access | T1558.003 | Steal or Forge Kerberos Tickets: Kerberoasting | Kerberoasting activity was observed prior to the attacker obtaining multiple high-privilege credentials.
Credential Access | T1003.003 | OS Credential Dumping: NTDS | NTDS dump activity was observed as part of credential harvesting prior to the attacker obtaining multiple high-privilege credentials.
Persistence | T1136.001 | Create Account: Local Account | The attacker created multiple new local accounts on compromised systems to establish additional persistent access prior to ransomware deployment.
Lateral Movement | T1021.002 | Remote Services: SMB/Windows Admin Shares | Using stolen high-privilege credentials, the attacker moved laterally across systems in the environment.
Persistence | T1484.001 | Domain Policy Modification: Group Policy Modification | The attacker created malicious Group Policy Objects to modify security configurations and deploy ransomware at scale.
Defense Evasion | T1562.001 | Impair Defenses: Disable or Modify Tools | A malicious Group Policy Object was created to disable Defender protections, including real-time protection and behavioral monitoring.
Execution | T1053.005 | Scheduled Task/Job: Scheduled Task | The attacker used Group Policy to create scheduled tasks that copied ransomware payload files from the SYSVOL share and executed them on target devices.
Execution | T1059.003 | Command and Scripting Interpreter: Windows Command Shell | Command-line instructions (cmd /c) were used within scheduled tasks to copy and launch the ransomware payload.
Execution | T1218.011 | System Binary Proxy Execution: Rundll32 | The ransomware execution chain used rundll32.exe to execute the malicious DLL payload.
Impact | T1486 | Data Encrypted for Impact | Ransomware deployment via Group Policy was attempted along with remote ransomware operations.

References

This research is provided by Microsoft Defender Security Research with contributions from Tal Tzhori and Aviv Sharon.

Learn more

Review our documentation to learn more about our real-time protection capabilities and see how to enable them within your organization.

Case study: Securing AI application supply chains
http://approjects.co.za/?big=en-us/security/blog/2026/01/30/case-study-securing-ai-application-supply-chains/
Fri, 30 Jan 2026 18:49:44 +0000

Securing AI-powered applications requires more than just safeguarding prompts. Organizations must adopt a holistic approach that includes monitoring the AI supply chain, assessing frameworks, SDKs, and orchestration layers for vulnerabilities, and enforcing strong runtime controls for agents and tools. Leveraging visibility into these components allows security teams to detect, respond to, and remediate risks before they can be exploited.

The post Case study: Securing AI application supply chains appeared first on Microsoft Security Blog.

The rapid adoption of AI applications, including agents, orchestrators, and autonomous workflows, represents a significant shift in how software systems are built and operated. Unlike traditional applications, these systems are active participants in execution. They make decisions, invoke tools, and interact with other systems on behalf of users. While this evolution enables new capabilities, it also introduces an expanded and less familiar attack surface.

Security discussions often focus on prompt-level protections, and that focus is justified. However, prompt security addresses only one layer of risk. Equally important is securing the AI application supply chain, including the frameworks, SDKs, and orchestration layers used to build and operate these systems. Vulnerabilities in these components can allow attackers to influence AI behavior, access sensitive resources, or compromise the broader application environment.

The recent disclosure of CVE-2025-68664, known as LangGrinch, in LangChain Core highlights the importance of securing the AI supply chain. This blog uses that real-world vulnerability to illustrate how Microsoft Defender posture management capabilities can help organizations identify and mitigate AI supply chain risks.

Case example: Serialization injection in LangChain (CVE-2025-68664)

A recently disclosed vulnerability in LangChain Core highlights how AI frameworks can become conduits for exploitation when workloads are not properly secured. Tracked as CVE-2025-68664 and commonly referred to as LangGrinch, this flaw exposes risks associated with insecure deserialization in agentic ecosystems that rely heavily on structured metadata exchange.

Vulnerability summary

CVE-2025-68664 is a serialization injection vulnerability affecting the langchain-core Python package. The issue stems from improper handling of internal metadata fields during the serialization and deserialization process. If exploited, an attacker could:

  • Extract secrets such as environment variables without authorization
  • Instantiate unintended classes during object reconstruction
  • Trigger side effects through malicious object initialization

The vulnerability carries a CVSS score of 9.3, highlighting the risks that arise when AI orchestration systems do not adequately separate control signals from user-supplied data.

Understanding the root cause: The lc marker

LangChain utilizes a custom serialization format to maintain state across different components of an AI chain. To distinguish between standard data and serialized LangChain objects, the framework uses a reserved key called lc. During deserialization, when the framework encounters a dictionary containing this key, it interprets the content as a trusted object rather than plain user data.

The vulnerability originates in the dumps() and dumpd() functions in affected versions of the langchain-core package. These functions did not properly escape or neutralize the lc key when processing user-controlled dictionaries. As a result, if an attacker is able to inject a dictionary containing the lc key into a data stream that is later serialized and deserialized, the framework may reconstruct a malicious object.

This is a classic example of an injection flaw where data and control signals are not properly separated, allowing untrusted input to influence the execution flow.
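The pattern described above can be illustrated with a deliberately simplified sketch. This is not LangChain's actual code; the class registry, function names, and escape key below are invented purely to show why an unescaped reserved marker lets user data cross over into the control path:

```python
# Hypothetical illustration of a reserved-marker deserialization flaw.
# Not LangChain's implementation; names and logic are invented.

TRUSTED_FACTORIES = {"Greeting": lambda payload: f"Hello, {payload}"}

def naive_deserialize(obj):
    """Treat any dict containing the reserved 'lc' key as a trusted object."""
    if isinstance(obj, dict) and "lc" in obj:
        # Control path: reconstruct an object from its metadata.
        factory = TRUSTED_FACTORIES.get(obj.get("type"))
        if factory is not None:
            return factory(obj.get("payload"))
    return obj  # Data path: plain user data passes through unchanged.

# User input that smuggles the reserved key into the data stream:
attacker_input = {"lc": 1, "type": "Greeting", "payload": "injected"}
print(naive_deserialize(attacker_input))  # the dict was treated as a trusted object

def escape_user_data(obj):
    """Mitigation sketch: escape the reserved key so user-controlled data can
    never be mistaken for control metadata during later deserialization."""
    if isinstance(obj, dict) and "lc" in obj:
        escaped = dict(obj)
        escaped["__escaped_lc__"] = escaped.pop("lc")
        return escaped
    return obj
```

The fix in patched versions follows the same principle as `escape_user_data`: neutralize the reserved key in user-controlled dictionaries before serialization so the deserializer can no longer confuse data with control metadata.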

Mitigation and protection guidance

Microsoft recommends that all organizations using LangChain review their deployments and apply the following mitigations immediately.

1. Update LangChain Core

The most effective defense is to upgrade to a patched version of the langchain-core package.

  • For 0.3.x users: Update to version 0.3.81 or later.
  • For 1.x users: Update to version 1.2.5 or later.
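For a quick local check of which line an environment is on, the patched-version boundaries above can be probed as follows. The `is_patched` helper is ours, not part of any Microsoft tooling, and assumes plain `X.Y.Z` version strings (pre-release suffixes are not handled):

```python
# Check the installed langchain-core version against the patched releases
# listed above (0.3.81 for the 0.3.x line, 1.2.5 for 1.x).
# Assumes plain "X.Y.Z" version strings; pre-release suffixes are not handled.
from importlib.metadata import PackageNotFoundError, version

def is_patched(ver: str) -> bool:
    major, minor, patch = (int(p) for p in ver.split(".")[:3])
    if major == 0:
        return (minor, patch) >= (3, 81)
    if major == 1:
        return (minor, patch) >= (2, 5)
    return major > 1  # assume later major lines carry the fix

try:
    installed = version("langchain-core")
    status = "patched" if is_patched(installed) else "VULNERABLE - upgrade now"
    print(f"langchain-core {installed}: {status}")
except PackageNotFoundError:
    print("langchain-core is not installed in this environment")
```

A check like this covers only the local interpreter; fleet-wide identification is better handled by the Defender for Cloud queries described next.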

2. Query the security explorer to identify any instances of LangChain in your environment

To identify instances of the LangChain package in assets protected by Defender for Cloud, customers can use the Cloud Security Explorer:

*Identification in cloud compute resources requires the Defender CSPM, Defender for Containers, or Defender for Servers plan.

*Identification in code environments requires connecting your code environment to Defender for Cloud. Learn how to set up connectors.

3. Remediate based on Defender for Cloud recommendations across the software development cycle: Code, Ship, Runtime

4. Create GitHub issues with runtime context directly from Defender for Cloud, track progress, and use the Copilot coding agent for AI-powered automated fixes

Learn more about Defender for Cloud's seamless workflows with GitHub, which shorten remediation times for security issues.

Microsoft Defender XDR detections 

Microsoft security products provide several layers of defense to help organizations identify and block exploitation attempts against vulnerable AI software.

Microsoft Defender provides visibility into vulnerable AI workloads through its Cloud Security Posture Management (Defender CSPM).

Vulnerability Assessment: Defender for Cloud scanners have been updated to identify containers and virtual machines running vulnerable versions of langchain-core. The Microsoft Defender team is actively working to expand coverage to additional platforms, and this blog will be updated as more information becomes available.

Hunting queries   

Microsoft Defender XDR

Security teams can use the advanced hunting capabilities in Microsoft Defender XDR to proactively look for indicators of exploitation. A common sign of exploitation is a Python process associated with LangChain attempting to access sensitive environment variables or making unexpected network connections immediately following an LLM interaction.

The following Kusto Query Language (KQL) query can be used to identify devices that are using the vulnerable software:

DeviceTvmSoftwareInventory
| where SoftwareName has "langchain"
    and (
        // 0.0.x-0.2.x (all affected)
        (SoftwareVersion startswith "0."
            and toint(split(SoftwareVersion, ".")[1]) < 3)
        // 0.3.0-0.3.80
        or (SoftwareVersion startswith "0.3."
            and toint(split(SoftwareVersion, ".")[2]) < 81)
        // 1.x affected before 1.2.5
        or (SoftwareVersion startswith "1."
            and (
                // 1.0.x or 1.1.x
                toint(split(SoftwareVersion, ".")[1]) < 2
                // 1.2.0-1.2.4
                or (toint(split(SoftwareVersion, ".")[1]) == 2
                    and toint(split(SoftwareVersion, ".")[2]) < 5)
            )
        )
    )
| project DeviceName, OSPlatform, SoftwareName, SoftwareVersion

Acknowledgments

This research is provided by Microsoft Defender Security Research with contributions from Tamer Salman, Astar Lev, Yossi Weizman, Hagai Ran Kestenberg, and Shai Yannai.

Learn more  

Review our documentation to learn more about our real-time protection capabilities and see how to enable them within your organization.  

Learn more about securing Copilot Studio agents with Microsoft Defender 

Learn more about protecting your agents in real time during runtime (Preview) with Microsoft Defender for Cloud Apps

Explore how to build and customize agents with Copilot Studio Agent Builder  

The post Case study: Securing AI application supply chains appeared first on Microsoft Security Blog.

]]>
Turning threat reports into detection insights with AI http://approjects.co.za/?big=en-us/security/blog/2026/01/29/turning-threat-reports-detection-insights-ai/ Thu, 29 Jan 2026 21:20:18 +0000 http://approjects.co.za/?big=en-us/security/blog/?p=144970 Security teams often spend days manually turning long incident reports and threat writeups into actionable detections by extracting TTPs. This blog post shows an AI-assisted workflow that does the same job in minutes. It extracts the TTPs, maps them to existing detection coverage, and flags potential gaps. Defenders can respond faster, with human experts still reviewing and validating the results.

The post Turning threat reports into detection insights with AI appeared first on Microsoft Security Blog.

]]>
Security teams routinely need to transform unstructured threat knowledge, such as incident narratives, red team breach-path writeups, threat actor profiles, and public reports, into concrete defensive action. The early stages of that work are often the slowest. These include extracting tactics, techniques, and procedures (TTPs) from long documents, mapping them to a standard taxonomy, and determining which TTPs are already covered by existing detections versus which represent potential gaps.

Complex documents that mix prose, tables, screenshots, links, and code make it easy to miss key details. As a result, manual analysis can take days or even weeks, depending on the scope and telemetry involved.

This post outlines an AI-assisted workflow for detection analysis designed to accelerate detection engineering. The workflow generates a structured initial analysis from common security content, such as incident reports and threat writeups. It extracts candidate TTPs from the content, validates those TTPs, and normalizes them to a consistent format, including alignment with the MITRE ATT&CK framework.

The workflow then performs coverage and gap analysis by comparing the extracted TTPs against an existing detection catalog. It combines similarity search with LLM-based validation to improve accuracy. The goal is to give defenders a high-quality starting point by quickly surfacing likely coverage areas and potential detection gaps.

This approach saves time and allows analysts to focus where they add the most value: validating findings, confirming what telemetry actually captures, and implementing or tuning detections.

Technical details

Figure 1: Overall flow of the analysis.

Figure 1 illustrates the overall architecture of the workflow for analyzing threat data. The system accepts multiple content types and processes them through three main stages: TTP extraction, MITRE ATT&CK mapping, and detection coverage analysis.

The workflow ingests artifacts that describe adversary behavior, including documents and web-based content. These artifacts include:

  • Red team reports
  • Threat intelligence (TI) reports
  • Threat actor (TA) profiles

The system supports multiple content formats, allowing teams to process both internal and external reports without manual reformatting.

During ingestion, the system breaks each document into machine-readable segments, such as text blocks, headings, and lists. It retains the original document structure to preserve context. This is important because the location of information, such as whether it appears in an appendix or in key findings, can affect how the data is interpreted. This is especially relevant for long reports that combine narrative text with supporting evidence.
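The structure-preserving segmentation described above can be sketched with a toy segmenter. This is our illustrative version, not the production pipeline; it handles only markdown-style headings, but it shows how each text block can carry its heading path so downstream steps know whether a detail came from the key findings or an appendix:

```python
# Toy sketch of structure-preserving ingestion (illustrative, not the
# production pipeline). Each segment records the heading path it sits under.
def segment_markdown(text: str):
    segments, path = [], []
    for line in text.splitlines():
        if line.startswith("#"):
            # Heading: update the current section path at the right depth.
            level = len(line) - len(line.lstrip("#"))
            path = path[: level - 1] + [line.lstrip("# ").strip()]
        elif line.strip():
            # Body text: attach the section context to the segment.
            segments.append({"section": " > ".join(path), "text": line.strip()})
    return segments

doc = "# Report\n## Key findings\nAttacker used phishing.\n## Appendix\nHash list."
for seg in segment_markdown(doc):
    print(seg)
```

A real ingester would also handle tables, lists, and embedded code, but the same principle applies: keep the location of each segment alongside its text.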

1) TTP and metadata extraction

The first major technical step extracts candidate TTPs from the ingested content. The workflow identifies technique-like behaviors described in free text and converts them into a structured format for review and downstream mapping.

The system uses specialized Large Language Model (LLM) prompts to extract this information from raw content. In addition to candidate TTPs, the system extracts supporting metadata, including:

  • Relevant cloud stack layers
  • Detection opportunities
  • Telemetry required for detection authoring
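A structured output format makes this extraction reviewable and machine-consumable. The schema below is a hypothetical example of what an extraction prompt might request; the field names are illustrative, not the actual schema used by the system:

```python
# Hypothetical JSON schema for one extracted TTP; field names are
# illustrative only, not the system's actual output format.
import json

ttp_schema = {
    "type": "object",
    "properties": {
        "behavior": {"type": "string"},              # technique-like behavior in plain language
        "evidence": {"type": "string"},              # supporting quote from the source report
        "cloud_layer": {"type": "string"},           # e.g. identity, compute, storage
        "detection_opportunity": {"type": "string"},
        "required_telemetry": {"type": "array", "items": {"type": "string"}},
    },
    "required": ["behavior", "evidence"],
}

example = {
    "behavior": "Credential dumping from LSASS memory",
    "evidence": "procdump.exe was used against lsass.exe",
    "cloud_layer": "compute",
    "detection_opportunity": "Process access to lsass.exe by non-system tools",
    "required_telemetry": ["DeviceProcessEvents"],
}
print(json.dumps(example, indent=2))
```

Requiring an evidence quote for every candidate TTP keeps the extraction grounded in the source document and simplifies later human review.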

2) MITRE ATT&CK mapping

The system validates MITRE ATT&CK mappings by normalizing extracted behaviors to specific technique identifiers and names. This process highlights areas of uncertainty for review and correction, helping standardize visibility into attack observations and potential protection gaps.

The goal is to map all relevant layers, including tactics, techniques, and sub-techniques, by assigning each extracted TTP to the appropriate level of the MITRE ATT&CK hierarchy. Each TTP is mapped using a single LLM call with Retrieval Augmented Generation (RAG). To maintain accuracy, the system uses a focused, one-at-a-time approach to mapping.
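The one-at-a-time RAG step can be sketched as follows. The retrieval corpus, prompt wording, and helper name are our own illustration; a real deployment would retrieve candidate technique descriptions from an embedding index and send the assembled prompt to the LLM:

```python
# Illustrative sketch of assembling a one-TTP-at-a-time RAG mapping prompt.
# The snippets, prompt wording, and function name are hypothetical.
ATTACK_SNIPPETS = {
    "T1003.001": "OS Credential Dumping: LSASS Memory - accessing LSASS process memory",
    "T1566.001": "Phishing: Spearphishing Attachment - malicious email attachments",
}

def build_mapping_prompt(ttp_text: str, retrieved: dict) -> str:
    """Combine one extracted behavior with retrieved ATT&CK candidates."""
    context = "\n".join(f"{tid}: {desc}" for tid, desc in retrieved.items())
    return (
        "Map the following behavior to exactly one MITRE ATT&CK technique ID "
        "from the candidates below. Reply with the ID only.\n\n"
        f"Behavior: {ttp_text}\n\nCandidates:\n{context}"
    )

prompt = build_mapping_prompt("procdump used against lsass.exe", ATTACK_SNIPPETS)
print(prompt)
```

Constraining the model to a small retrieved candidate set, one behavior at a time, is what keeps the mapping focused and reduces hallucinated technique IDs.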

3) Existing detections mapping and gap analysis

A key workflow step is mapping extracted TTPs against existing detections to determine which behaviors are already covered and where gaps may exist. This allows defenders to assess current coverage and prioritize detection development or tuning efforts.

Figure 2: Detection mapping process.

Figure 2 illustrates the end-to-end detection mapping process. This phase includes the following:

  • Vector similarity search: The system uses this to identify potential detection matches for each extracted TTP.
  • LLM-based validation: The system uses this to minimize false positives and to determine “likely covered” versus “likely gap” outcomes.

The vector similarity search process begins by standardizing all detections, including their metadata and code, during an offline preprocessing step. This information is stored in a relational database and includes details such as titles, descriptions, and MITRE ATT&CK mappings. In federated environments, detections may come from multiple repositories, so this standardization streamlines access during detection mapping. Selected fields are then used to build a vector database, enabling semantic search across detections.

Vector search uses approximate nearest neighbor algorithms and produces a similarity-based confidence score. Because setting effective thresholds for these scores can be challenging, the workflow includes a second validation step using an LLM. This step evaluates whether candidate mappings are valid for a given TTP using a tailored prompt.
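The two-stage matching can be sketched with a toy example. The embeddings below are hand-made stand-ins for model-generated vectors, and the threshold and detection names are invented; stage two (LLM validation) is only indicated in a comment:

```python
# Toy sketch of stage 1 (vector similarity search) from the process above.
# Vectors are hand-made stand-ins; a real system embeds detections with a model
# and stores them in a vector database with approximate nearest-neighbor search.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

detections = {
    "Suspicious LSASS access": [0.9, 0.1, 0.0],
    "Anomalous OAuth consent": [0.1, 0.9, 0.2],
}

def candidates(ttp_vec, k=1, threshold=0.7):
    """Return the top-k detections above a similarity threshold for one TTP."""
    scored = sorted(((cosine(ttp_vec, v), name) for name, v in detections.items()),
                    reverse=True)
    # Stage 2 would pass each hit to an LLM to validate the match in context.
    return [(name, round(score, 2)) for score, name in scored[:k] if score >= threshold]

print(candidates([0.85, 0.15, 0.05]))  # closest to "Suspicious LSASS access"
```

The hard-coded threshold is exactly the weak point the workflow addresses: rather than tuning it precisely, the system keeps it permissive and relies on the LLM validation pass to filter false positives.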

The final output highlights prioritized detection opportunities and identifies potential gaps. These results are intended as recommendations that defenders should confirm based on their environment and available telemetry. Because the analysis relies on extracted text and metadata, which may be ambiguous, these mappings do not guarantee detection coverage. Organizations should supplement this approach with real-world simulations to further validate the results.

Human-in-the-loop: why validation remains essential

Final confirmation requires human expertise and empirical validation. The workflow identifies promising detection opportunities and potential gaps, but confirmation depends on testing with real telemetry, simulation, and review of detection logic in context.

This boundary is important because coverage in this approach is primarily based on text similarity and metadata alignment. A detection may exist but operate at a different scope, depend on telemetry that is not universally available, or require correlation across multiple data sources. The purpose of the workflow is to reduce time to initial analysis so experts can focus on high-value validation and implementation work.

Practical advice for using AI

Large language models are powerful for accelerating security analysis, but they can be inconsistent across runs, especially when prompts, context, or inputs vary. Output quality depends heavily on the prompt, and long prompts may fail to convey intent clearly to the model.

1) Plan for inconsistency and make critical steps deterministic

For high-impact steps, such as TTP extraction or mapping behaviors to a taxonomy, prioritize stability over creativity:

  • Use stronger models for the most critical steps and reserve smaller or cheaper models for tasks like summarization or formatting. Reasoning models are often more effective than non-reasoning models.
  • Use structured outputs, such as JSON schemas, and explicit formatting requirements to reduce variance. Most state-of-the-art models now support structured output.
  • Include a self-critique or answer review step in the model output. Use sequential LLM calls or a multi-turn agentic workflow to ensure a satisfactory result.

2) Insert reviewer checkpoints where mistakes are costly

Even high-performing models can miss details in long or heterogeneous documents. To reduce the risk of omissions or incorrect mappings, add human-in-the-loop reviewer gates:

  • Reviewer checkpoints are especially valuable for final TTP lists and any “coverage vs. gap” conclusions.
  • Treat automated outputs as a first-pass hypothesis. Require expert validation and, if possible, empirical checks before operational decisions.

3) Optimize prompt context for better accuracy

Avoid including too much information in prompts. While modern models have large token windows, excess content can dilute relevance, increase cost, and reduce accuracy.

Best Practices:

  • Provide only the minimum necessary context. Focus on the information needed for the current step. Use RAG or staged, multi-step prompts instead of one large prompt.
  • Be specific. Use clear, direct instructions. Vague or open-ended requests often produce unclear results.

4) Build an evaluation loop

Establish an evaluation process for production-quality results:

  • Develop gold datasets and ground-truth samples to track coverage and accuracy over time.
  • Use expert reviews to validate results instead of relying on offline metrics.
  • Use evaluations to identify regressions when prompts, models, or context packaging change.
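An evaluation loop can be as simple as accuracy over a small gold set. The dataset, mapper stub, and metric below are illustrative; a production loop would run the full extraction-and-mapping pipeline against a much larger ground-truth corpus:

```python
# Minimal sketch of an evaluation loop over a gold dataset of TTP mappings.
# The examples and the stand-in mapper are illustrative only.
gold = [
    {"text": "dumped credentials from lsass", "expected": "T1003.001"},
    {"text": "scheduled task for persistence", "expected": "T1053.005"},
]

def evaluate(map_fn):
    """Fraction of gold examples the mapper gets exactly right."""
    hits = sum(1 for ex in gold if map_fn(ex["text"]) == ex["expected"])
    return hits / len(gold)

# Stand-in mapper with one deliberate error; a real run would call the pipeline.
stub = {
    "dumped credentials from lsass": "T1003.001",
    "scheduled task for persistence": "T1053.001",
}
accuracy = evaluate(lambda text: stub.get(text))
print(f"accuracy: {accuracy:.0%}")
```

Re-running such a loop after every prompt, model, or context change turns regressions into a measurable signal instead of an anecdote.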

Where AI accelerates detection and experts validate

Detection engineering is most effective when treated as a continuous loop:

  1. Gather new intelligence
  2. Extract relevant behaviors
  3. Check current coverage
  4. Set validation priorities
  5. Implement improvements

AI can accelerate the early stages of this loop by quickly structuring TTPs and enabling efficient matching against existing detections. This allows defenders to focus on higher-value work, such as validating coverage, investigating areas of uncertainty, and refining detection logic.

In evaluation, the AI-assisted approach to TTP extraction produced results comparable to those of security experts. By combining the speed of AI with expert review and validation, organizations can scale detection coverage analysis more effectively, even during periods of high reporting volume.

This research is provided by Microsoft Defender Security Research with contributions from Fatih Bulut.

References

  1. MITRE ATT&CK Framework: https://attack.mitre.org
  2. Fatih Bulut, Anjali Mangal. “Towards Autonomous Detection Engineering”. Annual Computer Security Applications Conference (ACSAC) 2025. Link: https://www.acsac.org/2025/files/web/acsac25-casestudy-bulut.pdf

The post Turning threat reports into detection insights with AI appeared first on Microsoft Security Blog.

]]>