Security operations Insights | Microsoft Security Blog http://approjects.co.za/?big=en-us/security/blog/topic/security-operations/ Expert coverage of cybersecurity topics Thu, 09 Apr 2026 18:34:05 +0000 en-US hourly 1 https://wordpress.org/?v=6.8.3 The agentic SOC—Rethinking SecOps for the next decade http://approjects.co.za/?big=en-us/security/blog/2026/04/09/the-agentic-soc-rethinking-secops-for-the-next-decade/ Thu, 09 Apr 2026 19:00:00 +0000 http://approjects.co.za/?big=en-us/security/blog/?p=146282 In the SOC of the future, autonomous defense moves at machine speed, agents add context and coordination, and humans focus on judgment, risk, and outcomes.

The post The agentic SOC—Rethinking SecOps for the next decade appeared first on Microsoft Security Blog.

]]>
Every major shift in cyberattacker behavior over the past decade has followed a meaningful shift in how defenders operate. When security operation centers (SOCs) deployed endpoint detection and response (EDR)—and later extended detection and response (XDR)—security teams raised the bar, pushing cyberattackers beyond phishing, commodity malware, and perimeter‑based attacks and into cloud infrastructure built for scale and speed.

That pattern continued as defenders embraced automation and AI to manage expanding digital estates. SOCs were often early scale adopters—using machine learning to reduce noise, improve visibility, and respond faster across growing environments. Cyberattackers became more targeted and multistage, moving deliberately across identities, endpoints, cloud resources, and email, where detection was hardest. Success increasingly depended on moving fast enough to act before analysts could connect the dots. Even with this progress, security operations (SecOps) still feel asymmetrical: threat actors only need to be right once, while defenders are judged by every miss. If defense depends on human intervention to begin, defense will always feel asymmetrical.

To change the outcome, SOCs must change how defense itself works. This is the agentic SOC: where security delivers adaptive, autonomous defense, freeing defenders for strategic, high‑impact work. In this series, we’ll break down what that shift requires, what early experimentation has taught us, and where organizations can start today. Read more about how some organizations moving toward the agentic SOC and access a foundational roadmap for this transformation in our new whitepaper, The agentic SOC: Your teammate for tomorrow, today.

What we mean by “the agentic SOC”

At its core, the agentic SOC is an operating model that shifts security from reacting to incidents to anticipating how cyberattackers move—and actively reshaping the environment to cut off their paths.

It brings together a platform that can increasingly defend itself through built-in autonomous defense, with AI agents working alongside humans to accelerate investigation, prioritization, and action—so teams spend less time on execution and more time on judgment, risk, and the decisions that matter.

How does that change day-to-day work? Imagine a credential theft attempt. Built-in defenses automatically lock the affected account and isolate the compromised device within seconds—before lateral movement can begin. At the same time, an AI agent initiates an investigation, hunting for related activity across identity, endpoint, email, and cloud signals, and correlating everything into a single view.

When an analyst opens their queue, the “noise” of overwhelming alerts is already gone. Evidence has been pre-assembled. Likely next steps are suggested. The analyst can start right away by answering higher impact questions: Is this part of a broader campaign? Should this authentication method be hardened? Are there related techniques this cyberattacker commonly uses that the environment is still exposed to?

In today’s SOC, we see that sequence often takes hours—and the proactive improvement is very limited, if it ever happens; there’s simply not enough time. In an agentic SOC, it happens in minutes, and teams can spend the time they’ve gained on deeper investigation, systemic hardening, and reducing the likelihood of repeat cyberattacks.

A layered model for the agentic SOC

This model works because an agentic SOC is built on two distinct, but interdependent layers. The first is an underlying threat protection platform that has fundamentally evolved how cyberattacks are defended against and disrupted. High confidence cyberthreats are handled automatically through deterministic, policy-bound controls built directly into the platform. Known attack patterns are blocked in real time—without deliberation or creativity—shielding the environment from machine-speed cyberthreats before scarce human attention or token intensive reasoning is required. This disruption layer is not optional; it is the prerequisite that makes an agentic SOC safe, scalable, and sustainable.

The second layer operates at the operational level, where agents take on tough analysis and correlation work to dramatically increase the leverage of security teams and shift focus from uncovering insight to acting on it. These agents reason over evidence, coordinate investigations, orchestrate response across domains, and learn continuously from outcomes. Over time, they help identify recurring attack paths, surface gaps in posture, and recommend changes that make the environment harder to exploit—not just faster to respond.

Together, they transform the SOC from a reactive workflow engine into a resilient system.

What’s real now, and why there’s reason for optimism

The optimism around our view of the agentic SOC comes from operational discipline and proven, real-world impact. Autonomous attack disruption has been operating at scale for years.

Read more about how Microsoft Defender establishes confidence for automatic action.

Attacks like ransomware are disrupted in an average of three minutes, and tens of thousands of attacks are contained every month by isolating compromised users and devices before lateral movement can take hold. This all done with a 99.99% confidence rating, so SOC teams can trust in its efficacy.

Building on that proven foundation, newer capabilities like predictive shielding extend autonomous defense further—anticipating how cyberattacks are likely to progress and proactively restricting high-risk paths or assets during an intrusion.

Read the case study about how predictive shielding in Microsoft Defender stopped Group Policy Object (GPO) ransomware before it started

Together, these system-level protections show that platforms can safely intervene earlier in the cyberattack chain without introducing unnecessary disruption.

Agentic capabilities are also being similarly scoped. Internally, we’ve been testing task agents for triage and investigations under our expert supervision of our defenders. In live environments, these agents automate 75% of phishing and malware investigations. We’ve also tested agents on more complex analytical tasks, such as assessing exposure to specific vulnerabilities—work that once required a full day of engineering effort and can now be completed in less than an hour by an agent.

How day-to-day SOC work will change in the future

In an agentic SOC, the center of gravity will change for roles like an analyst. Fewer analysts are pulled into firefighting; more time is spent investigating how the organization is being targeted and what steps can be taken to reduce exposure. Within this new operating model, security teams will be freed to evolve the team structure and their day-to-day responsibilities.

Agentic systems increase demand for oversight, tuning, and governance. Detection and response engineering becomes more central, as teams design policies, confidence thresholds, and escalation paths. New roles emerge around supervising outcomes and refining system behavior over time.

Expertise becomes more valuable, not less. Judgment, context, and institutional knowledge are no longer consumed by repetitive tasks—they shape how the SOC operates at scale. And skilled practitioners closer to strategy, quality, and accountability.

To make this shift tangible, here’s how key roles are evolving:

  • Analysts: from triaging alerts to supervising outcomes. Analysts validate agent‑led investigations, determine when deeper inquiry is needed, focus on ambiguous cases, and guide system learning over time.
  • Detection engineers: from writing rules to teaching the system what matters. Engineers decide which signals are trustworthy, add the right context, and set confidence thresholds so detections can be acted on automatically—without human review every time.
  • Threat hunters: from manual queries to hypothesis-driven exploration. Hunters use AI to surface anomalies and focus on creative investigation and adversary simulation.
  • SOC leadership: from managing queues to orchestrating autonomy. Leaders define automation policies, oversee governance, and align AI actions with business risk.

Each shift reflects a broader truth: in the agentic SOC, people don’t do less—they do more of what matters.

The agentic SOC journey

This is a significant change in how security teams operate, and it doesn’t happen overnight. Based on our own experience, we’ve outlined a maturity model that shows how organizations can progress toward an agentic SOC over time.

Organizations begin by establishing a trusted foundation that unifies security tooling, enables the deployment of autonomous defense and begins unifying security signal in earnest. From there, they introduce agents to take on bounded, high-volume work under human supervision, learning where automation adds leverage and where judgment still matters most. Over time, as confidence, governance, and operational discipline mature, agents expand from assisting individual workflows to coordinating broader security outcomes. At every stage, progress is measured not by how much work is automated, but by how effectively human expertise is amplified.

A horizontal gradient graphic transitioning from blue to purple shows a three-stage SOC maturity journey connected by a curved line, with labeled milestones reading “SOC I: Unify your platform foundation,” “SOC II: Accelerate operations with generative AI,” and “SOC III: Deploy agentic automation.”

SOC 1—Unify your platform foundation

The shift begins with a unified security platform that enables autonomous defense. Deterministic, policy-bound protections stop high confidence cyberthreats automatically—removing urgency, reducing blast radius, and eliminating the constant context switching that slows human response. By integrating signals across identity, endpoints, and cloud, defenders gain a shared view of cyberattacks instead of stitching evidence together across tools. This foundation is what makes cross-domain action possible—and separates experimental automation from production-ready operations.

SOC 2—Accelerate operations with generative AI and task agents

With urgency reduced, generative AI changes how work flows through the SOC. Instead of pushing alerts forward, AI assembles context, synthesizes signals across domains, and produces coherent investigations. Repetitive, high-volume tasks like triage, correlation, and basic investigation are absorbed by the system, allowing analysts to focus on higher impact decisions. This stage establishes new operational patterns where humans and AI work together—accelerating response while preserving judgment and accountability.

SOC 3—Deploy agentic automation

As trust grows, agents move from assistance to action. Specialized agents autonomously orchestrate specific tasks—containing compromised identities, isolating devices, or remediating reported phishing—while humans shift into supervisory roles. Over time, agents help identify patterns, anticipate attack paths, and optimize defenses across the environment. Security teams spend less time managing queues and more time shaping posture, risk, and outcomes. These shifts compound across all three stages.

What comes next for the SOC evolution?

We believe the strongest agentic SOC models will begin with autonomous defense—deterministic, policy‑bound actions that safely stop what is already known to be dangerous at machine speed. That foundation removes urgency, noise, and latency from security operations.

Additionally, agents and humans work differently. Agents assemble context, coordinate remediation, and optimize how the SOC operates. Humans provide intent, judgment, and accountability—turning time saved into smarter, more strategic security outcomes.

This is the first of a series of posts that will explore what makes the agentic SOC model real: the platform foundations required to defend autonomously, the governance and trust mechanisms that keep autonomy safe, and the adoption journey organizations take to get there. Some organizations are already rebuilding their businesses around AI, a new class of Frontier Firms. Read more about how they’re making their move toward the agentic SOC and access a foundational roadmap for this transformation in our new whitepaper, The agentic SOC: Your teammate for tomorrow, today.

Learn more

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity. 

The post The agentic SOC—Rethinking SecOps for the next decade appeared first on Microsoft Security Blog.

]]>
Help on the line: How a Microsoft Teams support call led to compromise http://approjects.co.za/?big=en-us/security/blog/2026/03/16/help-on-the-line-how-a-microsoft-teams-support-call-led-to-compromise/ Mon, 16 Mar 2026 16:00:00 +0000 A DART investigation into a Microsoft Teams voice phishing attack shows how deception and trusted tools can enable identity-led intrusions and how to stop them.

The post Help on the line: How a Microsoft Teams support call led to compromise appeared first on Microsoft Security Blog.

]]>
In our eighth Cyberattack Series report, Microsoft Incident Response—the Detection and Response Team (DART)—investigates a recent identity-first, human-operated intrusion that relied less on exploiting software vulnerabilities and more on deception and legitimate tools. After a customer reached out for assistance in November 2025, DART uncovered a campaign built on persistent Microsoft Teams voice phishing (vishing), where a threat actor impersonated IT support and targeted multiple employees. Following two failed attempts, the threat actor ultimately convinced a third user to grant remote access through Quick Assist, enabling the initial compromise of a corporate device.

This case highlights a growing class of cyberattacks that exploit trust, collaboration platforms, and built-in tooling, and underscores why defenders must be prepared to detect and disrupt these techniques before they escalate. Read the full report to dive deeper into this vishing breach of trust.

What happened?

Once remote interactive access was established, the threat actor shifted from social engineering to hands-on keyboard compromise, steering the user toward a malicious website under their control. Evidence gathered from browser history and Quick Assist artifacts showed the user was prompted to enter corporate credentials into a spoofed web form, which then initiated the download of multiple malicious payloads. One of the earliest artifacts—a disguised Microsoft Installer (MSI) package—used trusted Windows mechanisms to sideload a malicious dynamic link library (DLL) and establish outbound command-and-control, allowing the threat actor to execute code under the guise of legitimate software.

Subsequent payloads expanded this foothold, introducing encrypted loaders, remote command execution through standard administrative tooling, and proxy-based connectivity to obscure threat actor activity. Over time, additional components enabled credential harvesting and session hijacking, giving the threat actor sustained, interactive control within the environment and the ability to operate using techniques designed to blend in with normal enterprise activity rather than trigger overt alarms.

Trust is the weak point: Threat actors increasingly exploit trust—not just software flaws—using social engineering inside collaboration platforms to gain initial access.1

How did Microsoft respond?

Given the growing pattern of identity-first intrusions that begin with collaboration-based social engineering, DART moved quickly to contain risk and validate scope. The team confirmed that the compromise originated from a successful Microsoft Teams voice phishing interaction and immediately prioritized actions to prevent identity or directory-level impact. Through focused investigation, we established that the activity was short-lived and limited in reach, allowing responders to concentrate on early-stage tooling and entry points to understand how access was achieved and constrained.

To disrupt the intrusion, DART conducted targeted eviction and applied tactical containment controls to protect privileged assets and restrict lateral movement. Using proprietary forensic and investigation tooling, the team collected and analyzed evidence across affected systems, validated that threat actor objectives were not met, and confirmed the absence of persistence mechanisms. These actions enabled rapid recovery while helping to ensure the environment was fully secured before declaring the incident resolved.

What can customers do to strengthen their defenses?

Human nature works against us in these cyberattacks. Employees are conditioned to be responsive, helpful, and collaborative, especially when requests appear to come from internal IT or support teams. Threat actors exploit that instinct, using voice phishing and collaboration tools to create a sense of urgency and legitimacy that can override caution in the moment.

To mitigate exposure, DART recommends organizations take deliberate steps to limit how social engineering attacks can propagate through Microsoft Teams and how legitimate remote access tools can be misused. This starts with tightening external collaboration by restricting inbound communications from unmanaged Teams accounts and implementing an allowlist model that permits contact only from trusted external domains. At the same time, organizations should review their use of remote monitoring and management tools, inventory what is truly required, and remove or disable utilities—such as Quick Assist—where they are unnecessary.

Together, these measures help shrink the attack surface, reduce opportunities for identity-driven compromise, and make it harder for threat actors to turn human trust into initial access, while preserving the collaboration employees rely on to do their work.

What is the Cyberattack Series?

In our Cyberattack Series, customers discover how DART investigates unique and notable attacks. For each cyberattack story, we share:

  • How the cyberattack happened.
  • How the breach was discovered.
  • Microsoft’s investigation and eviction of the threat actor.
  • Strategies to avoid similar cyberattacks.

DART is made up of highly skilled investigators, researchers, engineers, and analysts who specialize in handling global security incidents. We’re here for customers with dedicated experts to work with you before, during, and after a cybersecurity incident.

Learn more

To learn more about DART capabilities, please visit our website, or reach out to your Microsoft account manager or Premier Support contact. To learn more about the cybersecurity incidents described above, including more insights and information on how to protect your own organization, download the full report.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.


1Microsoft Digital Defense Report 2025.

The post Help on the line: How a Microsoft Teams support call led to compromise appeared first on Microsoft Security Blog.

]]>
From transparency to action: What the latest Microsoft email security benchmark reveals http://approjects.co.za/?big=en-us/security/blog/2026/03/12/from-transparency-to-action-what-the-latest-microsoft-email-security-benchmark-reveals/ Thu, 12 Mar 2026 16:00:00 +0000 http://approjects.co.za/?big=en-us/security/blog/?p=145685 The latest Microsoft benchmarking data reveals how Microsoft Defender mitigates modern email threats compared to SEG and ICES vendors.

The post From transparency to action: What the latest Microsoft email security benchmark reveals appeared first on Microsoft Security Blog.

]]>
In our last benchmarking post, Clarity in complexity: New insights for transparent email security,1 we shared why transparency matters more than ever in email security and how clear, consistent benchmarking helps security teams cut through noise and make confident decisions.

Today, we’re continuing that conversation. With the latest Microsoft benchmarking data, we’re sharing what real-world telemetry reveals about how effectively modern email threats are detected, mitigated, and stopped by Microsoft Defender, secure email gateway (SEG) providers, and integrated cloud email security (ICES) solutions.

This is part of our ongoing commitment to openness: regularly publishing performance data so customers can see how protections perform at scale.

What’s new in the latest benchmarking data

The newest benchmarking results reflect updated telemetry across recent months and reinforce several consistent trends:

  • Microsoft Defender removes an average of 70.8% of malicious email post-delivery, helping reduce dwell time even when cyberthreats bypass initial filtering.
  • Layered protection matters. When Defender operates alongside ICES partners, organizations benefit from incremental detection gains across promotional, spam, and malicious messages.
  • Overlapping detections remain, meaning ICES solutions can flag the same messages and the incremental value-add can vary by scenario and email type.

This kind of data-driven visibility is critical for security teams who want to understand not just whether cyberthreats are blocked, but how and where defenses are adding value across the email attack lifecycle.

Benchmarking results for ICES vendors

Microsoft’s quarterly analysis shows that layering ICES solutions with Microsoft Defender continue to provide a benefit in reducing marketing and bulk email, improving their filtering by an average of 13.7%. This reduces inbox clutter and boosts user productivity in environments with high volumes of promotional email. For filtering of spam and malicious messages, the incremental gains remain modest, and the latest quarter shows a smaller uplift than the prior period—averaging 0.29% and 0.24% respectively, compared to 1.65% and 0.5% in the prior report.

Focusing only on malicious messages that reached the inbox, the latest quarter shows Microsoft Defender’s zero hour auto purge performing the majority of post‑delivery remediation—removing an average of 70.8% of these threats. ICES vendors provided additional post‑delivery filtering, contributing an average of 29.2%. Together, this highlights two points: post‑delivery remediation is a critical backstop when cyberthreats evade initial filtering, and in these results Microsoft Defender delivered most of the post‑delivery catch, while ICES vendors add incremental coverage in this scenario.

Benchmarking results for SEG vendors

For the SEG vendor benchmarking metrics, a cyberthreat was classified as “missed” if it was not detected prior to delivery. Using this definition, Microsoft Defender missed fewer high-severity cyberthreats than other solutions evaluated in the study, consistent with patterns observed in our prior benchmarking report.

Reinforcing our commitment to the ICES vendor ecosystem

Transparency doesn’t stop at Microsoft’s own detections. It also extends to how we work with partners.

When we introduced the Microsoft Defender for Office 365 ICES vendor ecosystem, our goal was clear: enable customers to integrate trusted, non-Microsoft email security solutions into a unified Defender experience, without fragmenting workflows or visibility.

That commitment continues today.

  • The ICES vendor ecosystem now includes four partners—Darktrace, KnowBe4, Cisco, and VIPRE Security Group—all integrated directly into Microsoft Defender across experiences such as Quarantine, Explorer, email entity pages, advanced hunting, and reporting.
  • Customers retain a single operational plane in the Defender portal, even when layering multiple email security technologies.
  • Integrations are deliberate and additive, designed to enhance protection and clarity without increasing operational complexity.
  • The ecosystem supports defense-in-depth strategies while preserving a single, coherent security experience.

The recent additions reinforce our belief that email security is strongest when it combines native platform intelligence with specialized partner capabilities, surfaced through a single pane of glass.

We continue to actively evaluate additional partnerships based on customer demand, detection quality, and the ability to deliver meaningful, differentiated signals.

Why this matters for security teams

Email remains one of the most targeted and exploited attack vectors, and modern campaigns rarely rely on a single technique or control gap.

By pairing transparent benchmarking with integrated, multi-vendor protection, security teams gain:

  • Clear insight into detection coverage across native and partner solutions.
  • Reduced investigation friction with unified views and workflows.
  • Confidence in layered defenses, backed by regularly published data.

This isn’t about claiming perfection. It’s about showing the work, sharing the numbers, and giving customers the information they need to make informed security decisions.

Looking ahead

We’ll continue to publish updated benchmarking insights on a regular basis, alongside ongoing investments in Microsoft Defender and the ICES vendor ecosystem.

To explore the latest benchmarking data and learn more about how Defender and ICES partners work together, access the benchmarking site.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.


1Clarity in complexity: New insights for transparent email security, Microsoft. December 10, 2025.

The post From transparency to action: What the latest Microsoft email security benchmark reveals appeared first on Microsoft Security Blog.

]]>
Secure agentic AI for your Frontier Transformation http://approjects.co.za/?big=en-us/security/blog/2026/03/09/secure-agentic-ai-for-your-frontier-transformation/ Mon, 09 Mar 2026 13:00:00 +0000 We are announcing the next step to make Frontier Transformation real for customers across every industry with Wave 3 of Microsoft 365 Copilot, Microsoft Agent 365, and Microsoft 365 E7: The Frontier Suite.

The post Secure agentic AI for your Frontier Transformation appeared first on Microsoft Security Blog.

]]>
Today we shared the next step to make Frontier Transformation real for customers across every industry with Wave 3 of Microsoft 365 Copilot, Microsoft Agent 365, and Microsoft 365 E7: The Frontier Suite.

As our customers rapidly embrace agentic AI, chief information officers (CIOs), chief information security officers (CISOs), and security decision makers are asking urgent questions: How do I track and monitor all these agents? How do I know what they are doing? Do they have the right access? Can they leak sensitive data? Are they protected from cyberthreats? How do I govern them?

Agent 365 and Microsoft 365 E7: The Frontier Suite, generally available on May 1, 2026, are designed to help answer these questions and give organizations the confidence to go further with AI.

Agent 365—the control plane for agents

As organizations adopt agentic AI, growing visibility and security gaps can increase the risk of agents becoming double agents. Without a unified control plane, IT, security, and business teams lack visibility into which agents exist, how they behave, who has access to them, and what potential security risks exist across the enterprise. With Microsoft Agent 365 you now have a unified control plane for agents that enables IT, security, and business teams to work together to observe, govern, and secure agents across your organization—including agents built with Microsoft AI platforms and agents from our ecosystem partners—using new Microsoft Security capabilities built into their existing flow of work.

Here is what that looks like in practice:

As we are now running Agent 365 in production, Avanade has real visibility into agent activity, the ability to govern agent sprawl, control resource usage, and manage agents as identity-aware digital entities in Microsoft Entra. This significantly reduces operational and security risk, represents a critical step forward in operationalizing the agent lifecycle at scale, and underscores Microsoft’s commitment to responsible, production-ready AI.

—Aaron Reich, Chief Technology and Information Officer, Avanade

Key Agent 365 capabilities include:

Observability for every role

With Agent 365, IT, security, and business teams gain visibility into all Agent 365 managed agents in their environment, understand how they are used, and can act quickly on performance, behavior, and risk signals relevant to their role—from within existing tools and workflows.

  • Agent Registry provides an inventory of agents in your organization, including agents built with Microsoft AI platforms, ecosystem partner agents, and agents registered through APIs. This agent inventory is available to IT teams in the Microsoft 365 admin center. Security teams see the same unified agent inventory in their existing Microsoft Defender and Purview workflows.
  • Agent behavior and performance observability provides detailed reports about agent performance, adoption and usage metrics, an agent map, and activity details.
  • Agent risk signals across Microsoft Defender*, Entra, and Purview* help security teams evaluate agent risk—just like they do for users—and block agent actions based on agent compromise, sign-in anomalies, and risky data interactions. Defender assesses risk of agent compromise, Entra evaluates identity risk, and Purview evaluates insider risk. IT also has visibility into these risks in the Microsoft 365 admin center.
  • Security policy templates, starting with Microsoft Entra, automate collaboration between IT and security. They enable security teams to define tenant-wide security policies that IT leaders can then enforce in the Microsoft 365 admin center as they onboard new agents.

*These capabilities are in public preview and will continue to be on May 1.

Secure and govern agent access

Unmanaged agents may create significant risk, from accessing resources unchecked to accumulating excessive privileges and being misused by malicious actors. With Microsoft Entra capabilities included in Agent 365, you can secure agent identities and their access to resources.

  • Agent ID gives each agent a unique identity in Microsoft Entra, designed specifically for the needs of agents. With Agent ID, organizations can apply trusted access policies at scale, reduce gaps from unmanaged identities, and keep agent access aligned to existing organizational controls.
  • Identity Protection and Conditional Access for agents extend existing user policies that make real-time access decisions based on risks, device compliance from Microsoft Intune, and custom security attributes to agents working on behalf of a user. These policies help prevent compromise and help ensure that agents cannot be misused by malicious actors.
  • Identity Governance for agents enables identity leaders to limit agent access to only resources they need, with access packages that can be scoped to a subset of the users permissions, and includes the ability to audit access granted to agents.

Prevent data oversharing and ensure agent compliance

Microsoft Purview capabilities in Agent 365 provide comprehensive data security and compliance coverage for agents. You can protect agents from accessing sensitive data, prevent data leaks from risky insiders, and help ensure agents process data responsibly to support compliance with global regulations.

  • Data Security Posture Management provides visibility and insights into data risks for agents so data security admins can proactively mitigate those risks.
  • Information Protection helps ensure that agents inherit and honor Microsoft 365 data sensitivity labels so that they follow the same rules as users for handling sensitive data to prevent agent-led sensitive data leaks.
  • Inline Data Loss Prevention (DLP) for prompts to Microsoft Copilot Studio agents blocks sensitive information such as personally identifiable information, credit card numbers, and custom sensitive information types (SITs) from being processed in the runtime.
  • Insider Risk Management extends insider risk protection to agents to help ensure that risky agent interactions with sensitive data are blocked and flagged to data security admins.
  • Data Lifecycle Management enables data retention and deletion policies for prompts and agent-generated data so you can manage risk and liability by keeping the data that you need and deleting what you don’t.  
  • Audit and eDiscovery extend core compliance and records management capabilities to agents, treating AI agents as auditable entities alongside users and applications. This will help ensure that organizations can audit, investigate, and defensibly manage AI agent activity across the enterprise.
  • Communication Compliance extends to agent interactions to detect and enable human oversight of risky AI communications. This enables business leaders to extend their code of conduct and data compliance policies to AI communications.

Defend agents against emerging cyberthreats

To help you stay ahead of emerging cyberthreats, Agent 365 includes Microsoft Defender protections purpose-built to detect and mitigate specific AI vulnerabilities and threats such as prompt manipulation, model tampering, and agent-based attack chains.

  • Security posture management for Microsoft Foundry and Copilot Studio agents* detects misconfigurations and vulnerabilities in agents so security leaders can stay ahead of malicious actors by proactively resolving them before they become an attack vector.
  • Detection, investigation, and response for Foundry and Copilot Studio agents* enables the investigation and remediation of attacks that target agents and helps ensure that agents are accounted for in security investigations.
  • Runtime threat protection, investigation, and hunting** for agents that use the Agent 365 tools gateway, helps organizations detect, block, and investigate malicious agent activities.

Agent 365 will be generally available on May 1, 2026, and priced at $15 per user per month. Learn more about Agent 365.

*These capabilities are in public preview and will continue to be on May 1.

**This new capability will enter public preview in April 2026 and continue to be on May 1.

Microsoft 365 E7: The Frontier Suite

Microsoft 365 E7 brings together intelligence and trust to enable organizations to accelerate Frontier Transformation, equipping employees with AI across email, documents, meetings, spreadsheets, and business application surfaces. It also gives IT and security leaders the observability and governance needed to operate AI at enterprise scale.

Microsoft 365 E7 includes Microsoft 365 Copilot, Agent 365, Microsoft Entra Suite, and Microsoft 365 E5 with advanced Defender, Entra, Intune, and Purview security capabilities to help secure users, delivering comprehensive protection across users and agents. It will be available for purchase on May 1, 2026, at a retail price of $99 per user per month. Learn more about Microsoft 365 E7.

End-to-end security for the agentic era

Frontier Transformation is anchored in intelligence and trust, and trust starts with security. Microsoft Security capabilities help protect 1.6 million customers at the speed and scale of AI.1 With Agent 365, we are extending these enterprise-grade capabilities so organizations can observe, secure, and govern agents and delivering comprehensive protection across agents and users with Microsoft 365 E7.

Secure your Frontier Transformation today with Agent 365 and Microsoft 365 E7: The Frontier Suite. And join us at RSAC Conference 2026 to learn more about these new solutions and hear from industry experts and customers who are shaping how agents can be observed, governed, secured, and trusted in the real world.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.


1Microsoft Fiscal Year 2026 Second Quarter Earnings Conference Call.

The post Secure agentic AI for your Frontier Transformation appeared first on Microsoft Security Blog.

]]>
Women’s History Month: Encouraging women in cybersecurity at every career stage http://approjects.co.za/?big=en-us/security/blog/2026/03/05/womens-history-month-encouraging-women-in-cybersecurity-at-every-career-stage/ Thu, 05 Mar 2026 17:00:00 +0000 http://approjects.co.za/?big=en-us/security/blog/?p=145412 This Women’s History Month, we explore ways to support the next generation of female defenders at every career stage.

The post Women’s History Month: Encouraging women in cybersecurity at every career stage appeared first on Microsoft Security Blog.

]]>
Women’s History Month—and International Women’s Day on March 8, 2026—always gives me pause for reflection. It’s a moment to think about how far we’ve come and think about who we choose to uplift as we look ahead.

Throughout my career, I’ve been inspired by extraordinary women leaders—trailblazers who broke barriers, opened doors, and reshaped what leadership in technology looks like. But today, I want to shine a light on another group that inspires me just as deeply: women early in their careers—the builders, learners, and question-askers who are defining the future of cybersecurity and developing their skills in the era of AI.

These women are entering the field at a moment of unprecedented complexity. Cyberthreats are accelerating. AI is reshaping how we defend, detect, and respond. And the stakes—for trust, safety, and resilience—have never been higher.

That’s exactly why it has never been more critical to have a wide range of experiences and perspectives in our defender community.

Be Cybersmart

Help educate everyone in your organization with cybersecurity awareness resources and training curated by the security experts at Microsoft.

Get the Be Cybersmart Kit.

Why diversity of perspectives is not optional in cybersecurity

Cybersecurity is fundamentally about understanding people—how they behave, how they make decisions, how systems can be misused, and where harm can occur. That’s why diversity of perspectives, backgrounds, experiences, and people is a security imperative.

The ISACA paper titled “The Value of Diversity and Inclusion in Cybersecurity” concludes that cybersecurity teams lacking diversity are at greater risk of engaging in limited threat modeling, exhibiting reduced innovation, and making less robust decisions in complex security environments. At Microsoft Security, we recognize that the cyberthreats we encounter are as varied and multifaceted as humanity itself.

To stay ahead, our teams must reflect that diversity across gender, background, culture, discipline, and lived experience.

When teams bring different perspectives to the table,

  • They ask better questions;
  • They surface risks earlier;
  • They design systems that work for more people;
  • And they build security that is resilient by design.

The power of women early in career and beyond

Women early in their career bring something incredibly powerful to cybersecurity and AI: fresh perspective paired with fearless curiosity. Women bring empathy, clarity, systems thinking, and collaborative leadership that directly strengthen our ability to detect cyberthreats, understand human behavior, and build secure products that work for everyone.

This makes me think of my valued friend and colleague, Lauren Buitta, who is the founder and chief executive officer (CEO) of Girl Security. Lauren has been a tireless advocate for providing women early in career—especially those from underrepresented backgrounds, with the skills and confidence needed to enter security careers. She often says, “Security isn’t just a discipline—it’s empowerment through knowledge.” That philosophy extends to Girl Security’s work preparing the next generation to navigate and lead in an AI-powered world. Her efforts show us that nurturing curiosity early on can have lasting effects throughout life.

They challenge assumptions that may no longer hold. They ask “why” before accepting “how.” They’re often the first to notice gaps—in data, in design, in who is represented and who is missing. Supporting women at this stage isn’t just about equity. It’s about strengthening the future of security itself. These actions build a stronger, more resilient security ecosystem.

Building and cultivating pathways for the next generation

Investing in women early in their cybersecurity and AI security careers is essential. Early access to education, opportunity, and confidence building experiences helps more women see themselves in this field—and choose to stay.

But if we stop there, we shouldn’t be surprised when the numbers don’t move.  In fact, independent global analyses from the Global Cybersecurity Forum and Boston Consulting Group show that women represent just 24% of the cybersecurity workforce worldwide—a figure reinforced by LinkedIn’s real-time labor market data. What I’ve realized is this: To change outcomes, we have to cultivate women throughout their careers—from first exposure to technical mastery, from early roles to leadership, and from individual contributor to decisionmaker. Otherwise, we’ll continue to bring women into the field without creating the conditions that allow them to grow, advance, and remain.

That means pairing early career investment with sustained support, inclusive cultures, and everyday actions that reinforce belonging and opportunity over time.

Here are meaningful steps we can all take—not just to widen the pipeline, but to strengthen it end to end:

1. Share stories from a diverse set of role models at every career stage.
Representation fuels imagination. When women early in career see themselves reflected in cybersecurity, they’re more likely to enter the field. When women midcareer and in senior roles see paths forward, they’re more likely to stay and lead.

2. Reevaluate job descriptions at entry and beyond.
Rigid expectations or narrow definitions of technical expertise discourage qualified candidates from applying, and can also limit progression into advanced or leadership roles.

3. Invest in inclusive training and early career programs and sustain learning over time.
Accessible, hands-on learning builds confidence early. Continued upskilling, reskilling, and leadership development ensure women can evolve alongside rapidly changing security and AI technologies.

4. Volunteer with organizations driving cybersecurity and AI education.
Groups like Girl Security and Women in CyberSecurity (WiCyS) are changing outcomes for thousands of girls and women. Your time, mentorship, or sponsorship helps build momentum early—and reinforces pathways later. I welcome you to join Nicole Ford, Vice President Customer Security Officer at Microsoft, who will be hosting a leadership lunch at the WiCyS conference to discuss cultivating leaders for the future and though advocacy and sponsorship.

5. Partner with community groups offering mentorship and sponsorship opportunities.
Mentorship is one of the strongest predictors of early career success. Sponsorship—advocacy that opens doors to stretch roles, visibility, and advancement—is critical for long term progression.

6. Be an ally every day across the full career journey.
Introduce emerging talent to your networks. Encourage them to speak up. Create space for them to lead. Advocate for their ideas in rooms they aren’t in yet—especially as stakes and visibility increase.

Our commitment—and our opportunity

At Microsoft, our mission is to empower every person and every organization on the planet to achieve more. That starts by ensuring the next generation of cybersecurity and AI security professionals has equitable access to opportunity, education, and belonging.

This Women’s History Month, let’s celebrate not only the women who have led the way — but the women who are just getting started.

They’re actively shaping security today, not just influencing its future. Security is a team sport and we need everyone in this team because together, we can build a safer, more inclusive digital future for all.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.

The post Women’s History Month: Encouraging women in cybersecurity at every career stage appeared first on Microsoft Security Blog.

]]>
Threat modeling AI applications http://approjects.co.za/?big=en-us/security/blog/2026/02/26/threat-modeling-ai-applications/ Thu, 26 Feb 2026 17:04:08 +0000 http://approjects.co.za/?big=en-us/security/blog/?p=145401 AI threat modeling helps teams identify misuse, emergent risk, and failure modes in probabilistic and agentic AI systems.

The post Threat modeling AI applications appeared first on Microsoft Security Blog.

]]>
Proactively identifying, assessing, and addressing risk in AI systems

We cannot anticipate every misuse or emergent behavior in AI systems. We can, however, identify what can go wrong, assess how bad it could be, and design systems that help reduce the likelihood or impact of those failure modes. That is the role of threat modeling: a structured way to identify, analyze, and prioritize risks early so teams can prepare for and limit the impact of real‑world failures or adversarial exploits.

Traditional threat modeling evolved around deterministic software: known code paths, predictable inputs and outputs, and relatively stable failure modes. AI systems (especially generative and agentic systems) break many of those assumptions. As a result, threat modeling must be adapted to a fundamentally different risk profile.

Why AI changes threat modeling

Generative AI systems are probabilistic and operate over a highly complex input space. The same input can produce different outputs across executions, and meaning can vary widely based on language, context, and culture. As a result, AI systems require reasoning about ranges of likely behavior, including rare but high‑impact outcomes, rather than a single predictable execution path.

This complexity is amplified by uneven input coverage and resourcing. Models perform differently across languages, dialects, cultural contexts, and modalities, particularly in low‑resourced settings. These gaps make behavior harder to predict and test, and they matter even in the absence of malicious intent. For threat modeling teams, this means reasoning not only about adversarial inputs, but also about where limitations in training data or understanding may surface failures unexpectedly.

Against this backdrop, AI introduces a fundamental shift in how inputs influence system behavior. Traditional software treats untrusted input as data. AI systems treat conversation and instruction as part of a single input stream, where text—including adversarial text—can be interpreted as executable intent. This behavior extends beyond text: multimodal models jointly interpret images and audio as inputs that can influence intent and outcomes.

As AI systems act on this interpreted intent, external inputs can directly influence model behavior, tool use, and downstream actions. This creates new attack surfaces that do not map cleanly to classic threat models, reshaping the AI risk landscape.

Three characteristics drive this shift:

  • Nondeterminism: AI systems require reasoning about ranges of behavior rather than single outcomes, including rare but severe failures.
  • Instruction‑following bias: Models are optimized to be helpful and compliant, making prompt injection, coercion, and manipulation easier when data and instructions are blended by default.
  • System expansion through tools and memory: Agentic systems can invoke APIs, persist state, and trigger workflows autonomously, allowing failures to compound rapidly across components.

Together, these factors introduce familiar risks in unfamiliar forms: prompt injection and indirect prompt injection via external data, misuse of tools, privilege escalation through chaining, silent data exfiltration, and confidently wrong outputs treated as fact.

AI systems also surface human‑centered risks that traditional threat models often overlook, including erosion of trust, overreliance on incorrect outputs, reinforcement of bias, and harm caused by persuasive but wrong responses. Effective AI threat modeling must treat these risks as first‑class concerns, alongside technical and security failures.

Differences in Threat Modeling: Traditional vs. AI Systems
CategoryTraditional SystemsAI Systems
Types of ThreatsFocus on preventing data breaches, malware, and unauthorized access.Includes traditional risks, but also AI-specific risks like adversarial attacks, model theft, and data poisoning.
Data SensitivityFocus on protecting data in storage and transit (confidentiality, integrity).In addition to protecting data, focus on data quality and integrity since flawed data can impact AI decisions.
System BehaviorDeterministic behavior—follows set rules and logic.Adaptive and evolving behavior—AI learns from data, making it less predictable.
Risks of Harmful OutputsRisks are limited to system downtime, unauthorized access, or data corruption.AI can generate harmful content, like biased outputs, misinformation, or even offensive language.
Attack SurfacesFocuses on software, network, and hardware vulnerabilities.Expanded attack surface includes AI models themselves—risk of adversarial inputs, model inversion, and tampering.
Mitigation StrategiesUses encryption, patching, and secure coding practices.Requires traditional methods plus new techniques like adversarial testing, bias detection, and continuous validation.
Transparency and ExplainabilityLogs, audits, and monitoring provide transparency for system decisions.AI often functions like a “black box”—explainability tools are needed to understand and trust AI decisions.
Safety and EthicsSafety concerns are generally limited to system failures or outages.Ethical concerns include harmful AI outputs, safety risks (e.g., self-driving cars), and fairness in AI decisions.

Start with assets, not attacks

Effective threat modeling begins by being explicit about what you are protecting. In AI systems, assets extend well beyond databases and credentials.

Common assets include:

  • User safety, especially when systems generate guidance that may influence actions.
  • User trust in system outputs and behavior.
  • Privacy and security of sensitive user and business data.
  • Integrity of instructions, prompts, and contextual data.
  • Integrity of agent actions and downstream effects.

Teams often under-protect abstract assets like trust or correctness, even though failures here cause the most lasting damage. Being explicit about assets also forces hard questions: What actions should this system never take? Some risks are unacceptable regardless of potential benefit, and threat modeling should surface those boundaries early.

Understand the system you’re actually building

Threat modeling only works when grounded in the system as it truly operates, not the simplified version of design docs.

For AI systems, this means understanding:

  • How users actually interact with the system.
  • How prompts, memory, and context are assembled and transformed.
  • Which external data sources are ingested, and under what trust assumptions.
  • What tools or APIs the system can invoke.
  • Whether actions are reactive or autonomous.
  • Where human approval is required and how it is enforced.

In AI systems, the prompt assembly pipeline is a first-class security boundary. Context retrieval, transformation, persistence, and reuse are where trust assumptions quietly accumulate. Many teams find that AI systems are more likely to fail in the gaps between components — where intent and control are implicit rather than enforced — than at their most obvious boundaries.

Model misuse and accidents 

AI systems are attractive targets because they are flexible and easy to abuse. Threat modeling has always focused on motivated adversaries:

  • Who is the adversary?
  • What are they trying to achieve?
  • How could the system help them (intentionally or not)?

Examples include extracting sensitive data through crafted prompts, coercing agents into misusing tools, triggering high-impact actions via indirect inputs, or manipulating outputs to mislead downstream users.

With AI systems, threat modeling must also account for accidental misuse—failures that emerge without malicious intent but still cause real harm. Common patterns include:

  • Overestimation of Intelligence: Users may assume AI systems are more capable, accurate, or reliable than they are, treating outputs as expert judgment rather than probabilistic responses.
  • Unintended Use: Users may apply AI outputs outside the context they were designed for, or assume safeguards exist where they do not.
  • Overreliance: When users accept incorrect or incomplete AI outputs, typically because AI system design makes it difficult to spot errors.

Every boundary where external data can influence prompts, memory, or actions should be treated as high-risk by default. If a feature cannot be defended without unacceptable stakeholder harm, that is a signal to rethink the feature, not to accept the risk by default.

Use impact to determine priority, and likelihood to shape response

Not all failures are equal. Some are rare but catastrophic; others are frequent but contained. For AI systems operating at a massive scale, even low‑likelihood events can surface in real deployments.

Historically risk management multiplies impact by likelihood to prioritize risks. This doesn’t work for massively scaled systems. A behavior that occurs once in a million interactions may occur thousands of times per day in global deployment. Multiplying high impact by low likelihood often creates false comfort and pressure to dismiss severe risks as “unlikely.” That is a warning sign to look more closely at the threat, not justification to look away from it.

A more useful framing separates prioritization from response:

  • Impact drives priority: High-severity risks demand attention regardless of frequency.
  • Likelihood shapes response: Rare but severe failures may rely on manual escalation and human review; frequent failures require automated, scalable controls.
Figure 1 Impact, Likelihood, and Mitigation by Alyssa Ofstein.

Every identified threat needs an explicit response plan. “Low likelihood” is not a stopping point, especially in probabilistic systems where drift and compounding effects are expected.

Design mitigations into the architecture

AI behavior emerges from interactions between models, data, tools, and users. Effective mitigations must be architectural, designed to constrain failure rather than react to it.

Common architectural mitigations include:

  • Clear separation between system instructions and untrusted content.
  • Explicit marking or encoding of untrusted external data.
  • Least-privilege access to tools and actions.
  • Allow lists for retrieval and external calls.
  • Human-in-the-loop approval for high-risk or irreversible actions.
  • Validation and redaction of outputs before data leaves the system.

These controls assume the model may misunderstand intent. Whereas traditional threat modeling assumes that risks can be 100% mitigated, AI threat modeling focuses on limiting blast radius rather than enforcing perfect behavior. Residual risk for AI systems is not a failure of engineering; it is an expected property of non-determinism. Threat modeling helps teams manage that risk deliberately, through defense in depth and layered controls.

Detection, observability, and response

Threat modeling does not end at prevention. In complex AI systems, some failures are inevitable, and visibility often determines whether incidents are contained or systemic.

Strong observability enables:

  • Detection of misuse or anomalous behavior.
  • Attribution to specific inputs, agents, tools, or data sources.
  • Accountability through traceable, reviewable actions.
  • Learning from real-world behavior rather than assumptions.

In practice, systems need logging of prompts and context, clear attribution of actions, signals when untrusted data influences outputs, and audit trails that support forensic analysis. This observability turns AI behavior from something teams hope is safe into something they can verify, debug, and improve over time.

 Response mechanisms build on this foundation. Some classes of abuse or failure can be handled automatically, such as rate limiting, access revocation, or feature disablement. Others require human judgment, particularly when user impact or safety is involved. What matters most is that response paths are designed intentionally, not improvised under pressure.

Threat modeling as an ongoing discipline

AI threat modeling is not a specialized activity reserved for security teams. It is a shared responsibility across engineering, product, and design.

The most resilient systems are built by teams that treat threat modeling as one part of a continuous design discipline — shaping architecture, constraining ambition, and keeping human impact in view. As AI systems become more autonomous and embedded in real workflows, the cost of getting this wrong increases.

Get started with AI threat modeling by doing three things:

  1. Map where untrusted data enters your system.
  2. Set clear “never do” boundaries.
  3. Design detection and response for failures at scale.

As AI systems and threats change, these practices should be reviewed often, not just once. Thoughtful threat modeling, applied early and revisited often, remains an important tool for building AI systems that better earn and maintain trust over time

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.

The post Threat modeling AI applications appeared first on Microsoft Security Blog.

]]>
Scaling security operations with Microsoft Defender autonomous defense and expert-led services http://approjects.co.za/?big=en-us/security/blog/2026/02/24/scaling-security-operations-with-microsoft-defender-autonomous-defense-and-expert-led-services/ Tue, 24 Feb 2026 13:00:00 +0000 http://approjects.co.za/?big=en-us/security/blog/?p=145004 AI-powered cyberattacks outpace aging SOC tools, and this new guide explains why manual defense fails and how autonomous, expert-led security transforms modern protection.

The post Scaling security operations with Microsoft Defender autonomous defense and expert-led services appeared first on Microsoft Security Blog.

]]>
Today’s security leaders are operating in an environment of truncated cyberattack timelines with aging defenses built for slower, linear cyberthreats that can no longer keep pace with advanced cyberthreats. AI-powered threat actors now use social engineering and malware that adapt in real time, allowing a single phishing message to escalate into a multidomain compromise within minutes. In many organizations, however, the bigger challenge lies closer to home: Years of accumulated technical debt inside the security operations center (SOC) and best-of-breed security investments have left many teams grappling with stitched together siloed tools, each producing fragments of insight that analysts must manually piece together. They’re also struggling with closing the skills gap and finding the right expertise.

The new e-book, Unlocking Microsoft Defender: A guide to autonomous defense and expert-led security, explores why this model has become unsustainable and how organizations can shift to a more integrated approach to modern defense. Implementing genuine SOC transformation is no easy task, and many organizations seek outside expertise to affect real change. Sign up to download the e-book now and learn more about topics like how autonomous defense paired with human judgment can help organizations tackle today’s toughest cyberthreats, and how adding services from Microsoft Security Experts can help defend against threats, build cyber resilience, and modernize security operations.

WASTED EFFORT: 20% of an analyst’s week—one full workday in five—is lost to manual toil.1

Why autonomous defense is now the standard

To keep pace with this new class of threat actor, security teams need to move beyond incremental automation and fundamentally rethink how defense operates. For years, SOCs have relied on manual triage—analysts chasing large volumes of low confidence alerts across disconnected tools. Security orchestration, automation, and response (SOAR) platforms improved efficiency by automating known responses, but they remain reactive by design, engaging only after an incident has already taken shape. This model struggles when attacks unfold in minutes, not days.

ALERT OVERLOAD: 42% of alerts go uninvestigated simply due to capacity constraints.1

The next evolution is an agentic SOC—one where defense is driven by continuous signal correlation, automated decision making, and human expertise applied where it matters most. Microsoft Defender XDR provides a unified operational layer across domains, closing visibility gaps created by siloed tools and enabling automated disruption of complex attacks before they escalate. By shifting routine investigation and response to AI-powered agents, security teams can reduce response time, contain cyberthreats earlier, and refocus human effort on proactive hunting, strategic analysis, and resilience rather than constant firefighting.

The blueprint for autonomous defense

The shift toward autonomous defense starts with unifying how security operations work. Fragmented tools force teams to interpret cyberthreats one signal at a time, leaving context scattered and response uneven. The guide explores how coordinated defense brings threat signals and protection actions together, revealing patterns that individual alerts may never reveal on their own. Instead of adjudicating noise, teams gain clear attack narratives that support faster, more confident decisions.

Autonomous defense builds on that foundation by using AI to act early in the attack lifecycle—not after damage is done. The e-book examines how modern platforms can contain in-progress threats and anticipate attacker movement, reducing reliance on manual escalation and static response models. The result is a SOC that spends less time reacting to incidents and more time shaping security outcomes—an operating model designed for speed, scale, and the inevitability of attack.

See how Microsoft Security Experts uncover fake remote workers

In the e‑book, we explore how autonomous defense is most effective when paired with human judgment and deep experience managing real incidents. Automated protection serves as the foundational security layer, blocking cyberthreats at machine speed, and reducing operational strain. When cyberattacks evolve or escalate, expert‑led hunting and managed detection and response bring global threat intelligence and real‑world insight to contain incidents and strengthen defenses. Human insights feed back into the platform, continuously improving automated protections and sharpening the organization’s overall security posture. In this video, we share a story of how fake profiles and fabricated identities can sometimes appear all too real.

Turn autonomous defense into resilient security

The e-book includes information about how organizations layer expertise at every stage of modern defense—combining autonomous protection with continuous human insight. Microsoft Security Experts helps in three key ways: with technical advisory to help modernize security operations, managed extended detection and response for around the clock defense against cyberthreats, and incident response and planning to build cyber resilience. The e-book further explains how this model emphasizes earlier threat discovery, reduced noise, and faster, more confident decision‑making as part of day‑to‑day security operations.

Sign up to download the e-book and read about how intelligence‑led incident response and direct access to security advisors can help organizations build long‑term resilience—not just recover from individual incidents. With expert guidance on readiness, response, and platform optimization, security teams can modernize operations, reduce integration overhead, and measurably improve outcomes. The result is a more resilient security program—one that resolves cyberthreats faster, lowers breach risk, consolidates cost, and enables teams to focus on solving meaningful security problems rather than chasing alerts.

Learn more about the Microsoft Defender Experts Suite

As security teams confront faster, more complex cyberattacks—and persistent gaps in skills and capacity—many are looking for practical ways to strengthen defenses without adding operational strain. The Microsoft Defender Experts Suite provides expert‑led security services to help organizations defend against advanced cyberthreats, improve resilience, and modernize security operations. If you’re exploring how to combine autonomous protection with continuous human expertise, read the full announcement for deeper context on what’s new and how these services work together.

Learn more

Learn more about Microsoft Security Experts and Microsoft Defender XDR.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity. 


1Microsoft and Omdia, State of the SOC: Unify Now or Pay Later report, 2026.

The post Scaling security operations with Microsoft Defender autonomous defense and expert-led services appeared first on Microsoft Security Blog.

]]>
Unify now or pay later: New research exposes the operational cost of a fragmented SOC http://approjects.co.za/?big=en-us/security/blog/2026/02/17/unify-now-or-pay-later-new-research-exposes-the-operational-cost-of-a-fragmented-soc/ Tue, 17 Feb 2026 17:00:00 +0000 http://approjects.co.za/?big=en-us/security/blog/?p=145254 New research from Microsoft and Omdia reveals how fragmented tools, manual workflows, and alert overload are pushing SOCs to a breaking point.

The post Unify now or pay later: New research exposes the operational cost of a fragmented SOC appeared first on Microsoft Security Blog.

]]>
Security operations are entering a pivotal moment: the operating model that grew around network logs and phishing emails is now buckling under tool sprawl, manual triage, and threat actors that outpace defender capacity. New research from Microsoft and Omdia shows just how heavy the burden can be—security operations centers (SOCs) juggle double-digit consoles, teams manually ingest data several times a week, and nearly half of all alerts go uninvestigated. The result is a growing gap between cyberattacker speed and defender capacity. Read State of the SOC—Unify Now or Pay Later to learn how hidden operational pressures impact resilience—compelling evidence to why unification, automation, and AI-powered workflows are quickly becoming non-negotiables for modern SOC performance.

The forces pushing modern SOC operations to a breaking point

The report surfaces five specific operational pressures shaping the modern SOC—spanning fragmentation, manual toil, signal overload, business-level risk exposure, and detection bias. Separately, each data point is striking. But taken together, they reveal a more consequential reality: analysts spend their time stitching context across consoles and working through endless queues, while real cyberattacks move in parallel. When investigations stall and alerts go untriaged, missed signals don’t just hurt metrics—they create the conditions for preventable compromises. Let’s take a closer look at each of the five issues:

1. Fragmentation

Fragmented tools and disconnected data force analysts to pivot across an average of 10.9 consoles1 and manually reconstruct context, slowing investigations and increasing the likelihood of missed signals. These gaps compound when only about 59% of tools push data to the security information and event management (SIEM), leaving most SOCs manually ingesting data and operating with incomplete visibility.

2. Manual toil

Manual, repetitive data work consumes an outsized share of analyst capacity, with 66% of SOCs losing 20% of their week to aggregation and correlation—an operational drain that delays investigations, suppresses threat hunting, and weakens the SOC’s ability to reduce real risk.

3. Security signal overload

Surging alert volumes bury analysts in noise with an estimated 46% of alerts proving false positives and 42% going uninvestigated, overwhelming capacity, driving fatigue, and increasing the likelihood real cyberthreats slip through unnoticed.

4. Operational gaps

Operational gaps are directly translating into business disrupting incidents, with 91% of security leaders reporting serious events and more than half experiencing five or more in the past year—exposing organizations to financial loss, downtime, and reputational damage.

5. Detection bias

Detection bias keeps SOCs focused on tuning alerts for familiar cyberthreats—52% of positive alerts map to known vulnerabilities—leaving dangerous blind spots for emerging tactics, techniques, and procedures (TTPs). This reactive posture slows proactive threat hunting and weakens readiness for novel attacks even as 75% of security leaders worry the SOC is losing pace with new cyberthreats.

Read the full report for the deeper story, including chief information security officer (CISO)-level takeaways, expanded data, and the complete analysis behind each operational pressure, as well as insights that can help security professionals strengthen their strategy and improve real world SOC outcomes.

What CISOs can do now to strengthen resilience

Security leaders have a clear path to easing today’s operational strain: unify the environment, automate what slows teams down, and elevate identity and endpoint as a single control plane. The shift is already underway as forward-leaning organizations focus on high-impact wins—automating routine lookups, reducing noise, streamlining triage, and eliminating the fragmentation and manual toil that drain analyst capacity. Identity remains the most critical failure point, and leaders increasingly view unified identity to endpoint protection as foundational to reducing exposure and restoring defender agility. And as environments unify, the strength of the underlying graph and data lake becomes essential for connecting signals at scale and accelerating every defender workflow.

As AI matures, leaders are also looking for governable, customizable approaches—not black box automation. They want AI agents they can shape to their environment, integrate deeply with their SIEM, and extend across cloud, identity, and on-premises signals. This mindset reflects a broader operational shift: modern key performance indicators (KPIs) will improve only when tools, workflows, and investigations are unified, and automation frees analysts for higher value work.

The report details a roadmap for CISOs that emphasizes unifying signals, embedding AI into core workflows, and strengthening identity as the primary control point for reducing risk. It shows how leaders can turn operational friction into strategic momentum by consolidating tools, automating routine investigation steps, elevating analysts to higher value work, and preparing their SOCs for a future defined by integrated visibility, adaptive defenses, and AI-assisted decision making.

Chart your path forward

The pressures facing today’s SOCs are real, but the path forward is increasingly clear. As this report shows, organizations that take these steps aren’t just reducing operational friction—they’re building a stronger foundation for rapid detection, decisive response, and long-term readiness. Read State of the SOC—Unify Now or Pay Later for deeper guidance, expanded findings, and a phased roadmap that can help security professionals chart the next era of their SOC evolution.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.


1The study, commissioned by Microsoft, was conducted by Omdia from June 25, 2025, to July 23, 2025. Survey respondents (N=300) included security professionals responsible for SOC operations at mid-market and enterprise organizations (more than 750 employees) across the United States, United Kingdom, and Australia and New Zealand. All statistics included in this post are from the study.

The post Unify now or pay later: New research exposes the operational cost of a fragmented SOC appeared first on Microsoft Security Blog.

]]>
The strategic SIEM buyer’s guide: Choosing an AI-ready platform for the agentic era http://approjects.co.za/?big=en-us/security/blog/2026/02/11/the-strategic-siem-buyers-guide-choosing-an-ai-ready-platform-for-the-agentic-era/ Wed, 11 Feb 2026 17:00:00 +0000 http://approjects.co.za/?big=en-us/security/blog/?p=145140 New guide details how a unified, AI ready SIEM platform empowers security leaders to operate at the speed of AI, strengthen resilience, accelerate detection and response, and more.

The post The strategic SIEM buyer’s guide: Choosing an AI-ready platform for the agentic era appeared first on Microsoft Security Blog.

]]>
As the agentic era reshapes security operations, leaders face a strategic inflection point: legacy security information and event management (SIEM) solutions and fragmented toolchains can no longer keep pace with the scale, speed, and complexity of modern cyberthreats. Organizations can choose to spend the next year tuning and integrating their SIEM stack—or simplify the architecture and let a unified platform do the heavy lifting. If they choose a platform, it should make it inexpensive to ingest and retain more telemetry, automatically shape that data into analysis‑ready form, and enrich it with graph‑driven intelligence so both analysts and AI can quickly understand what matters and why. The strategic SIEM buyer’s guide outlines what decision‑makers should look for as they build a future‑ready security operations center (SOC). Read on for a preview of key concepts covered in the guide.

Build a unified, future-proof foundation

As organizations step into the agentic AI era, the priority shifts to establishing a security foundation that can absorb rapid change without adding operational drag. That requires an architecture built for flexibility—one that brings security data, analytics, and response capabilities together rather than scattering them across aging infrastructure. A unified, cloud‑native platform gives security teams the structural advantage of consistent visibility, elastic scale, and a single source of truth for both human analysts and AI systems. By consolidating core functions into one environment, leaders can modernize the SOC in a deliberate, sustainable way while positioning their teams to capitalize on emerging AI‑powered security capabilities.

Accelerate detection and response with AI

As cyberthreats evolve faster than traditional workflows can manage, the advantage shifts to SOCs that can elevate detection and response with adaptive automation. Modern platforms augment analysts with real‑time correlation, automated investigation, and adaptive orchestration that reduces manual steps and shortens exposure windows. By standardizing access to high‑quality security data and enabling agents to act on that context, organizations improve precision, reduce noise, and transition from reactive triage to continuous, intelligence‑driven response. This shift not only accelerates outcomes but frees teams to focus on higher‑value threat hunting and strategic risk reduction.

Maximize return on investment and accelerate time to value

Driving measurable value is now a leadership imperative, and modern SIEM platforms must deliver results without protracted deployments or heavy reliance on specialized expertise. AI-ready solutions reduce onboarding friction through prebuilt connectors, embedded analytics, and turnkey content that produce meaningful detection coverage within hours—not months.

“Microsoft Sentinel’s ease of use means we can go ahead and deploy our solutions much faster. It means we can get insights into how things are operating more quickly.”

—Director of IT in the healthcare industry

By consolidating core workflows into a single environment, organizations avoid the hidden costs of operating multiple tools and shorten the path from implementation to impact. As adaptive AI optimizes configurations, prioritizes coverage gaps, and streamlines operations, security leaders gain a clearer return on investment while reallocating resources toward strategic risk reduction instead of maintenance and integration work. AI‑ready solutions reduce onboarding friction through pre‑built connectors, embedded analytics, and turnkey content that produce meaningful detection coverage within hours—not months.

Turning guidance into action with Microsoft

The guide also outlines where Microsoft Sentinel delivers meaningful advantages for modern SOC leaders—from its cloud‑native scale and unified data foundation to integrated SIEM, security orchestration, automation, and response (SOAR), extended detection and response (XDR), and advanced analytics in a single AI‑ready platform. It includes practical tips for evaluating vendors, highlighting the importance of unification, cloud‑native elasticity, and avoiding fragmented add‑ons that drive hidden costs. Together, the three essentials—building a unified foundation, accelerating detection and response with AI, and maximizing return on investment through rapid time to value—establish a clear roadmap for modernizing security operations.

Read The strategic SIEM buyer’s guide for the full analysis, vendor considerations, and detailed guidance on selecting an AI‑ready platform for the agentic era.

Learn more

Learn more about Microsoft Sentinel or discover more about Microsoft Unified SecOps.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.

The post The strategic SIEM buyer’s guide: Choosing an AI-ready platform for the agentic era appeared first on Microsoft Security Blog.

]]>
The security implementation gap: Why Microsoft is supporting Operation Winter SHIELD http://approjects.co.za/?big=en-us/security/blog/2026/02/05/the-security-implementation-gap-why-microsoft-is-supporting-operation-winter-shield/ Thu, 05 Feb 2026 17:00:00 +0000 Most security incidents happen in the gap between knowing what matters and actually implementing security controls consistently. Read how Microsoft is helping organizations close this implementation gap.

The post The security implementation gap: Why Microsoft is supporting Operation Winter SHIELD appeared first on Microsoft Security Blog.

]]>
Every conversation I have with information security leaders tends to land in the same place. People understand what matters. They know the frameworks, the controls, and the guidance. They can explain why identity security, patching, and access control are critical. And yet incidents keep happening for the same reasons.

Successful cyberattacks rarely depend on something novel. They succeed when basic controls are missing or inconsistently applied. Stolen credentials still work. Legacy authentication is still enabled. End-of-life systems remain connected and operational, though of course not well patched.

This is not a knowledge problem. It is an execution and follow through problem. We know what we’re supposed to do, but we need to get on with doing it. The gap between knowing what matters and enforcing it completely is where most real-world incidents occur.

If the basics were that easy to implement, everyone would have them in place already.

That gap is where cyberattackers operate most effectively, and it is the gap that Operation Winter SHIELD is designed to address as a collaborative effort across the public and private sector.

Why Operation Winter SHIELD matters

Operation Winter SHIELD is a nine-week cybersecurity initiative led by the FBI Cyber Division beginning February 2, 2026. The focus is not awareness or education for its own sake. The focus is on implementation. Specifically, how organizations operationalize the real security guidance that reduces risk in real environments.

This effort reflects a necessary shift in how we approach security at scale. Most organizations do not fail because they chose the wrong security product or the wrong framework. They fail because controls that look straightforward on paper are difficult to deploy consistently across complex, expanding environments.

Microsoft is providing implementation resources to help organizations focus on what actually changes outcomes. To do this, we’re sharing guidance on controls, like Baseline Security Mode that hold up under real world pressure, from real world threat actors.

What the FBI Cyber Division sees in real incidents

The FBI Cyber Division brings a perspective that is grounded in investigations. Their teams respond to incidents, support victim organizations through recovery, and build cases against the cybercriminal networks we defend against every day. This investigative perspective reveals which missing controls turn manageable events into prolonged incident crises.

That perspective aligns with what we see through Microsoft Threat Intelligence and Microsoft Incident Response. The patterns repeat across industries, geographies, and organization sizes.

Nation-sponsored threat actors exploit end-of-life infrastructure that no longer receives security updates. Ransomware operations move laterally using over privileged accounts and weak authentication. Criminal groups capitalize on misconfigurations that were understood but never fully addressed.

These are not edge cases. They are repeatable failures that cyberattackers rely on because they continue to work.

When incidents arise, it is rarely because defenders lacked guidance. It is because controls were incomplete, inconsistently enforced, or bypassed through legacy paths that remained open.

The reality of execution challenge

Defenders are not indifferent to these risks. They are certainly not unaware. They operate in environments defined by complexity, competing priorities, and limited resources. Controls that seem simple in isolation become difficult when they must be deployed across identities, devices, applications, and cloud services that were not designed at the same time.

In parallel, the cyberthreat landscape has matured. Initial access brokers sell credentials at scale. Ransomware operations function like businesses. Attack chains move quickly and often complete before the defenders can meaningfully intervene.

Detection windows shrink. Dwell time is no longer an actionable metric. The margin for error is smaller than it has ever been before.

Operation Winter SHIELD exists to narrow that margin by focusing attention on high impact control areas and showing how they can help defenders succeed when they are enforced.

Each week, we’ll focus on a high-impact control area informed by investigative insights drawn from active cases and long-term trends. This is not about introducing yet another security framework or hammering back again on the basics. It is about reinforcing what already works and confronting, honestly, why it is so often not fully implemented.

Moving from guidance to guardrails

Microsoft’s role in Operation Winter SHIELD is to help organizations move from insight to action. That means providing practical guidance, technical resources, and examples of how built-in platform capabilities can reduce the operational friction that slows deployment.

A central theme throughout the initiative is secure by default and by design. The fastest way to close implementation gaps is to reduce the number of decisions defenders must make under pressure. Controls that are enforced by default remove reliance on error-prone configurations and constant human vigilance.

Baseline Security Mode reflects this approach in practice. It enforces protections that harden identity and access across the environment. It blocks legacy authentication paths. It requires phish-resistant multifactor authentication for administrators. It surfaces legacy systems that are no longer supported. And it enforces least-privilege access patterns. These protections apply immediately when enabled and are informed by threat intelligence from Microsoft’s global visibility and lessons learned from thousands of incident response engagements.

The same guardrail model applies to the software supply chain. Build and deployment systems are frequent intrusion points because they are implicitly trusted and rarely governed with the same rigor as production environments. Enforcing identity isolation, signed artifacts, and least-privilege access for build pipelines reduces the risk that a single compromised developer account or token becomes a pathway into production.

These risks are not limited to technical pipelines alone. They are compounded when ownership, accountability, and enforcement mechanisms are unclear or inconsistently applied across the organization.

Governance controls only matter when they translate into enforceable technical outcomes. Requiring centralized ownership of security configuration, explicit exception handling, and continuous validation ensures that risk decisions are deliberate and traceable.

The objective is straightforward. Reduce the distance between guidance and guardrails. We must look to turn recommendations into protections that are consistently applied and continuously maintained.

What you can expect from Operation Winter SHIELD

Starting the week of February 2, 2026, you can expect focused guidance on the controls that have the greatest impact on reducing exposure to cybercrime. The initiative is not about creating new requirements. It is about improving execution of what already works.

Security maturity is not measured by what exists in policy documents or architecture diagrams. It is measured by what is enforced in production. It is measured by whether controls hold under real world conditions and whether they remain effective as environments change.

The cybercrime problem does not improve through awareness. It improves through execution, shared responsibility, and continued focus on closing the gaps threat actors exploit most reliably. You can expect to hear this guidance materialize on the FBI’s Cybercrime Division’s podcast, Ahead of the Threat, and a future episode of the Microsoft Threat Intelligence Podcast.

Building real resilience

Operation Winter SHIELD represents a focused effort to help organizations strengthen operational resilience. Microsoft’s contribution reflects a long-standing commitment to making security controls easier to deploy and more resilient over time.

Over the coming weeks and extending beyond this initiative, we will continue to share practical content designed to support organizations at every stage of their security maturity. Security is a process, not a product. The goal is not perfection, the goal is progress that threat actors feel. We will impose cost.

The gap between knowing what matters and doing it consistently is where threat actors have learned to operate. Closing that gap requires coordination, shared learning, and a willingness to prioritize enforcement over intention.

Operation Winter SHIELD offers an opportunity to drive systematic improvement to one control area at a time. Investigative experience explains why each control matters. Secure defaults and automation provide the path to implementation.

This work extends beyond any single awareness effort. The tactics threat actors use change quickly. The controls that reduce risk largely remain stable. What determines outcomes is how quickly and reliably those controls are put in place.

That is the work ahead. Moving from abstract ideas to real world security. Join me in going from knowing to doing.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.

The post The security implementation gap: Why Microsoft is supporting Operation Winter SHIELD appeared first on Microsoft Security Blog.

]]>