Security management Insights | Microsoft Security Blog
http://approjects.co.za/?big=en-us/security/blog/topic/security-management/
Expert coverage of cybersecurity topics

Threat actor abuse of AI accelerates from tool to cyberattack surface
http://approjects.co.za/?big=en-us/security/blog/2026/04/02/threat-actor-abuse-of-ai-accelerates-from-tool-to-cyberattack-surface/
Thu, 02 Apr 2026 16:00:00 +0000

Generative AI is upgrading cyberattacks, from phishing click-through rates roughly 4.5 times higher to industrialized MFA bypass.

For the last year, one word has defined the conversation at the intersection of AI and cybersecurity: speed. Speed matters, but it is not the most important shift we are observing across the threat landscape today. Now, threat actors from nation-states to cybercrime groups are embedding AI into how they plan, refine, and sustain cyberattacks. The objectives haven’t changed, but the tempo, iteration, and scale of generative AI-enabled attacks are certainly upgrading them.

Like defenders, however, threat actors typically still have a human in the loop powering these attacks; fully autonomous, agentic AI is not yet running campaigns. AI is reducing friction across the attack lifecycle, helping threat actors research faster, write better lures, vibe-code malware, and triage stolen data. The security leaders I spoke with at RSAC™ 2026 Conference this week are shifting resources and strategy to get ahead of this critical progression across the threat landscape.

The operational reality: Embedded, not emerging

The scale of what we are tracking is impossible to dismiss. Threat activity spans every region. The United States alone represents nearly 25% of observed activity, followed by the United Kingdom, Israel, and Germany. That volume reflects economic and geopolitical realities.1

But the bigger shift is not geographic, it’s operational. Threat actors are embedding AI into how they work across reconnaissance, malware development, and post-compromise operations. Objectives like credential theft, financial gain, and espionage might look familiar, but the precision, persistence, and scale behind them have changed.

Email is still the fastest inroad

Email remains the fastest and cheapest path to initial access. What has changed is the level of refinement that AI enables in crafting the message that gets someone to click.

When AI is embedded into phishing operations, we are seeing click-through rates reach 54%, compared to roughly 12% for more traditional campaigns. That is roughly 4.5 times the effectiveness. It is not the result of increased volume but of improved precision. AI is helping threat actors localize content and adapt messaging to specific roles, reducing the friction in crafting a lure that converts into access. When you combine that improved effectiveness with infrastructure designed to bypass multifactor authentication (MFA), the result is phishing operations that are more resilient, more targeted, and significantly harder to defend at scale.

A 4.5-fold increase in click-through rates changes the risk calculus for every organization. It also signals that AI is not just being used to do more of the same; it is being used to do it better.
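The click-through comparison above reduces to simple arithmetic. A quick sketch, where the 12% and 54% rates come from the post and the 10,000-email volume is a hypothetical illustration:

```python
# Click-through rates reported in the post.
traditional_ctr = 0.12   # conventional phishing campaigns
ai_assisted_ctr = 0.54   # AI-assisted phishing campaigns

# Relative effectiveness: how many times higher the AI-assisted rate is.
fold_increase = round(ai_assisted_ctr / traditional_ctr, 1)
print(f"{fold_increase}x the click-through rate")  # 4.5x

# Extra victims expected per 10,000 phishing emails delivered (hypothetical volume).
extra_clicks = int((ai_assisted_ctr - traditional_ctr) * 10_000)
print(f"{extra_clicks} additional clicks per 10,000 emails")  # 4200
```

The per-email cost of a campaign barely changes, which is why a precision gain of this size shifts the defender's risk calculus more than raw volume growth would.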

Tycoon2FA: What industrial-scale cybercrime looks like

Tycoon2FA is an example of how the actor we track as Storm-1747 shifted toward refinement and resilience. Understanding how it operated shows where threats might be headed, and it fueled conversations in the briefing rooms at RSAC 2026 this week that focused on the ecosystem rather than individual actors.

Tycoon2FA was not just a phishing kit; it was a subscription platform that generated tens of millions of phishing emails per month and was linked to nearly 100,000 compromised organizations since 2023. At its peak, it accounted for roughly 62% of all phishing attempts that Microsoft was blocking every month. The operation specialized in adversary-in-the-middle attacks designed to defeat MFA: it intercepted credentials and session tokens in real time, allowing attackers to authenticate as legitimate users without triggering alerts, even after passwords were reset.

But the technical capability is only part of the story. The bigger shift is structural. Storm-1747 was not operating alone. This was modular cybercrime: one service handled phishing templates, another provided infrastructure, another managed email distribution, another monetized access. It was effectively an assembly line for identity theft. The services were composable, scalable, and available by subscription.

This is the model that has changed the conversations this week: it is not about a single sophisticated actor; it is about an ecosystem that has industrialized access and lowered the barrier to entry for every actor that plugs into it. That is exactly what AI is doing across the broader threat landscape: making the capabilities of sophisticated actors available to everyone.

Disruption: Closing the threat intelligence loop

Our Digital Crimes Unit disrupted Tycoon2FA earlier this month, seizing 330 domains in coordination with Europol and industry partners. But the goal was not simply to take down websites. The goal was to apply pressure to a supply chain. Cybercrime today is about scalable service models that lower the barrier to entry. Identity is the primary target and MFA bypass is now packaged as a feature. Disrupting one service forces the market to adapt. Sustained pressure fragments the ecosystem. By targeting the economic engine behind attacks, we can reshape the risk environment.

Every time we disrupt an attack, it generates signal. The signal feeds intelligence. The intelligence strengthens detection. Detection is what drives response. That is how we turn threat actor actions into durable defenses, and how the work of disruption compounds over time. Microsoft’s ability to observe at scale, act at scale, and share intelligence at scale is the differentiation that matters. It makes a difference because of how we put it into practice.

AI across the full attack lifecycle

When we step back from any single campaign and look for a broader pattern, AI doesn’t show up in just one phase of an attack; it appears across the entire lifecycle. At RSAC 2026 this week, I offered a frame to help defenders prioritize their response:

  • In reconnaissance: AI accelerates infrastructure discovery and persona development, compressing the time between target selection and first contact. 
  • In resource development: AI generates forged documents and polished social engineering narratives, and supports infrastructure at scale. 
  • For initial access: AI refines voice overlays, deepfakes, and message customization using scraped data, producing lures that are increasingly difficult to distinguish from legitimate communications. 
  • In persistence and evasion: AI scales fake identities and automates communication that maintains attacker presence while blending with normal activity. 
  • In weaponization: AI enables malware development, payload regeneration, and real-time debugging, producing tooling that adapts to the victim environment rather than relying on static signatures. 
  • In post-compromise operations: AI adapts tooling to the specific victim environment and, in some cases, automates ransom negotiation itself. 

The objective has not changed: credential theft, financial gain, and espionage. What has changed is the tempo, the iteration speed, and the ability to test and refine at scale. AI is not just accelerating cyberattacks, it’s upgrading them.

What comes next

In my sessions at RSAC 2026 this week, I shared a set of themes that help define the AI-powered shift in the threat landscape.

The first is the agentic threat model. The scenarios we prepare for have changed. The barrier to launching sophisticated attacks has collapsed. What once required the resources of a nation-state or well-organized criminal enterprise is now accessible to a motivated individual with the right tools and the patience to use them. The techniques have not fundamentally changed; the precision, velocity, and volume have.

The second is the software supply chain. Knowing what software and agents you have deployed and being able to account for their behavior is not a compliance exercise. The agent ecosystem will become the most attacked surface in the enterprise. Organizations that cannot answer basic inventory questions about their agent environment will not be able to defend it.

The third is understanding the value of human talent in a security operation that uses agentic systems to scale. The security analyst as practitioner is giving way to the security analyst as orchestrator, and the talent models organizations are hiring against today are already outdated. Technology can help protect humans who make mistakes, but that means auditability of agent decisions is a governance requirement today, not eventually. The SOC of the future demands a fundamentally different kind of defender.

The moment to lead with strategic clarity, ranked priorities, and a hardened posture for agentic accountability is now.

If AI is embedded across the attack lifecycle, intelligence and defense must be embedded across the lifecycle too. Microsoft Threat Intelligence will continue to track, publish, and act on what we are observing in real time. The patterns are visible. The intelligence is there.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.


1Microsoft Digital Defense Report 2025.

Help on the line: How a Microsoft Teams support call led to compromise
http://approjects.co.za/?big=en-us/security/blog/2026/03/16/help-on-the-line-how-a-microsoft-teams-support-call-led-to-compromise/
Mon, 16 Mar 2026 16:00:00 +0000

A DART investigation into a Microsoft Teams voice phishing attack shows how deception and trusted tools can enable identity-led intrusions and how to stop them.

In our eighth Cyberattack Series report, Microsoft Incident Response—the Detection and Response Team (DART)—investigates a recent identity-first, human-operated intrusion that relied less on exploiting software vulnerabilities and more on deception and legitimate tools. After a customer reached out for assistance in November 2025, DART uncovered a campaign built on persistent Microsoft Teams voice phishing (vishing), where a threat actor impersonated IT support and targeted multiple employees. Following two failed attempts, the threat actor ultimately convinced a third user to grant remote access through Quick Assist, enabling the initial compromise of a corporate device.

This case highlights a growing class of cyberattacks that exploit trust, collaboration platforms, and built-in tooling, and underscores why defenders must be prepared to detect and disrupt these techniques before they escalate. Read the full report to dive deeper into this vishing breach of trust.

What happened?

Once remote interactive access was established, the threat actor shifted from social engineering to hands-on keyboard compromise, steering the user toward a malicious website under their control. Evidence gathered from browser history and Quick Assist artifacts showed the user was prompted to enter corporate credentials into a spoofed web form, which then initiated the download of multiple malicious payloads. One of the earliest artifacts—a disguised Microsoft Installer (MSI) package—used trusted Windows mechanisms to sideload a malicious dynamic link library (DLL) and establish outbound command-and-control, allowing the threat actor to execute code under the guise of legitimate software.

Subsequent payloads expanded this foothold, introducing encrypted loaders, remote command execution through standard administrative tooling, and proxy-based connectivity to obscure threat actor activity. Over time, additional components enabled credential harvesting and session hijacking, giving the threat actor sustained, interactive control within the environment and the ability to operate using techniques designed to blend in with normal enterprise activity rather than trigger overt alarms.

Trust is the weak point: Threat actors increasingly exploit trust—not just software flaws—using social engineering inside collaboration platforms to gain initial access.1

How did Microsoft respond?

Given the growing pattern of identity-first intrusions that begin with collaboration-based social engineering, DART moved quickly to contain risk and validate scope. The team confirmed that the compromise originated from a successful Microsoft Teams voice phishing interaction and immediately prioritized actions to prevent identity or directory-level impact. Through focused investigation, we established that the activity was short-lived and limited in reach, allowing responders to concentrate on early-stage tooling and entry points to understand how access was achieved and constrained.

To disrupt the intrusion, DART conducted targeted eviction and applied tactical containment controls to protect privileged assets and restrict lateral movement. Using proprietary forensic and investigation tooling, the team collected and analyzed evidence across affected systems, validated that threat actor objectives were not met, and confirmed the absence of persistence mechanisms. These actions enabled rapid recovery while helping to ensure the environment was fully secured before declaring the incident resolved.

What can customers do to strengthen their defenses?

Human nature works against us in these cyberattacks. Employees are conditioned to be responsive, helpful, and collaborative, especially when requests appear to come from internal IT or support teams. Threat actors exploit that instinct, using voice phishing and collaboration tools to create a sense of urgency and legitimacy that can override caution in the moment.

To mitigate exposure, DART recommends organizations take deliberate steps to limit how social engineering attacks can propagate through Microsoft Teams and how legitimate remote access tools can be misused. This starts with tightening external collaboration by restricting inbound communications from unmanaged Teams accounts and implementing an allowlist model that permits contact only from trusted external domains. At the same time, organizations should review their use of remote monitoring and management tools, inventory what is truly required, and remove or disable utilities—such as Quick Assist—where they are unnecessary.
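The allowlist model DART describes is, at its core, a default-deny policy check on the sender's domain. The sketch below is illustrative only; the domain list and helper function are hypothetical and not the Teams admin API:

```python
# Hypothetical allowlist of trusted external Teams domains (default-deny).
ALLOWED_EXTERNAL_DOMAINS = {"partner.example.com", "msp.example.org"}

def is_external_contact_allowed(sender_upn: str, tenant_domain: str) -> bool:
    """Always allow internal senders; allow external senders only when
    their domain is explicitly on the allowlist."""
    domain = sender_upn.rsplit("@", 1)[-1].lower()
    if domain == tenant_domain.lower():
        return True  # internal user in the same tenant
    return domain in ALLOWED_EXTERNAL_DOMAINS

# A vishing actor posing as IT support from an unmanaged tenant is rejected.
print(is_external_contact_allowed("it-support@unknown-helpdesk.net", "contoso.com"))  # False
print(is_external_contact_allowed("alice@partner.example.com", "contoso.com"))        # True
```

The design point is the default: a blocklist lets every new attacker tenant through until someone notices, while an allowlist rejects unknown domains by construction, which matches DART's recommendation.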

Together, these measures help shrink the attack surface, reduce opportunities for identity-driven compromise, and make it harder for threat actors to turn human trust into initial access, while preserving the collaboration employees rely on to do their work.

What is the Cyberattack Series?

In our Cyberattack Series, customers discover how DART investigates unique and notable attacks. For each cyberattack story, we share:

  • How the cyberattack happened.
  • How the breach was discovered.
  • Microsoft’s investigation and eviction of the threat actor.
  • Strategies to avoid similar cyberattacks.

DART is made up of highly skilled investigators, researchers, engineers, and analysts who specialize in handling global security incidents. We’re here for customers with dedicated experts to work with you before, during, and after a cybersecurity incident.

Learn more

To learn more about DART capabilities, please visit our website, or reach out to your Microsoft account manager or Premier Support contact. To learn more about the cybersecurity incidents described above, including more insights and information on how to protect your own organization, download the full report.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.


1Microsoft Digital Defense Report 2025.

From transparency to action: What the latest Microsoft email security benchmark reveals
http://approjects.co.za/?big=en-us/security/blog/2026/03/12/from-transparency-to-action-what-the-latest-microsoft-email-security-benchmark-reveals/
Thu, 12 Mar 2026 16:00:00 +0000

The latest Microsoft benchmarking data reveals how Microsoft Defender mitigates modern email threats compared to SEG and ICES vendors.

In our last benchmarking post, Clarity in complexity: New insights for transparent email security,1 we shared why transparency matters more than ever in email security and how clear, consistent benchmarking helps security teams cut through noise and make confident decisions.

Today, we’re continuing that conversation. With the latest Microsoft benchmarking data, we’re sharing what real-world telemetry reveals about how effectively modern email threats are detected, mitigated, and stopped by Microsoft Defender, secure email gateway (SEG) providers, and integrated cloud email security (ICES) solutions.

This is part of our ongoing commitment to openness: regularly publishing performance data so customers can see how protections perform at scale.

What’s new in the latest benchmarking data

The newest benchmarking results reflect updated telemetry across recent months and reinforce several consistent trends:

  • Microsoft Defender removes an average of 70.8% of malicious email post-delivery, helping reduce dwell time even when cyberthreats bypass initial filtering.
  • Layered protection matters. When Defender operates alongside ICES partners, organizations benefit from incremental detection gains across promotional, spam, and malicious messages.
  • Overlapping detections remain, meaning ICES solutions can flag the same messages and the incremental value-add can vary by scenario and email type.

This kind of data-driven visibility is critical for security teams who want to understand not just whether cyberthreats are blocked, but how and where defenses are adding value across the email attack lifecycle.

Benchmarking results for ICES vendors

Microsoft’s quarterly analysis shows that layering ICES solutions with Microsoft Defender continues to provide a benefit in reducing marketing and bulk email, improving filtering by an average of 13.7%. This reduces inbox clutter and boosts user productivity in environments with high volumes of promotional email. For filtering of spam and malicious messages, the incremental gains remain modest, and the latest quarter shows a smaller uplift than the prior period, averaging 0.29% and 0.24% respectively, compared to 1.65% and 0.5% in the prior report.

Focusing only on malicious messages that reached the inbox, the latest quarter shows Microsoft Defender’s zero-hour auto purge performing the majority of post-delivery remediation, removing an average of 70.8% of these threats. ICES vendors provided additional post-delivery filtering, contributing an average of 29.2%. Together, this highlights two points: post-delivery remediation is a critical backstop when cyberthreats evade initial filtering, and in these results Microsoft Defender delivered most of the post-delivery catch, while ICES vendors added incremental coverage in this scenario.
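The post-delivery split can be sanity-checked with simple arithmetic. The 70.8%/29.2% shares come from the post; the message volume is a hypothetical illustration:

```python
# Hypothetical count of malicious messages that landed in inboxes.
delivered_malicious = 1_000

# Post-delivery remediation shares reported in the post.
zap_share = 0.708    # removed by zero-hour auto purge
ices_share = 0.292   # removed by ICES vendor post-delivery filtering

removed_by_zap = round(delivered_malicious * zap_share)
removed_by_ices = round(delivered_malicious * ices_share)

print(removed_by_zap, removed_by_ices)  # 708 292
# The two shares account for all post-delivery remediation in this dataset.
assert removed_by_zap + removed_by_ices == delivered_malicious
```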

Benchmarking results for SEG vendors

For the SEG vendor benchmarking metrics, a cyberthreat was classified as “missed” if it was not detected prior to delivery. Using this definition, Microsoft Defender missed fewer high-severity cyberthreats than other solutions evaluated in the study, consistent with patterns observed in our prior benchmarking report.

Reinforcing our commitment to the ICES vendor ecosystem

Transparency doesn’t stop at Microsoft’s own detections. It also extends to how we work with partners.

When we introduced the Microsoft Defender for Office 365 ICES vendor ecosystem, our goal was clear: enable customers to integrate trusted, non-Microsoft email security solutions into a unified Defender experience, without fragmenting workflows or visibility.

That commitment continues today.

  • The ICES vendor ecosystem now includes four partners—Darktrace, KnowBe4, Cisco, and VIPRE Security Group—all integrated directly into Microsoft Defender across experiences such as Quarantine, Explorer, email entity pages, advanced hunting, and reporting.
  • Customers retain a single operational plane in the Defender portal, even when layering multiple email security technologies.
  • Integrations are deliberate and additive, designed to enhance protection and clarity without increasing operational complexity.
  • The ecosystem supports defense-in-depth strategies while preserving a single, coherent security experience.

The recent additions reinforce our belief that email security is strongest when it combines native platform intelligence with specialized partner capabilities, surfaced through a single pane of glass.

We continue to actively evaluate additional partnerships based on customer demand, detection quality, and the ability to deliver meaningful, differentiated signals.

Why this matters for security teams

Email remains one of the most targeted and exploited attack vectors, and modern campaigns rarely rely on a single technique or control gap.

By pairing transparent benchmarking with integrated, multi-vendor protection, security teams gain:

  • Clear insight into detection coverage across native and partner solutions.
  • Reduced investigation friction with unified views and workflows.
  • Confidence in layered defenses, backed by regularly published data.

This isn’t about claiming perfection. It’s about showing the work, sharing the numbers, and giving customers the information they need to make informed security decisions.

Looking ahead

We’ll continue to publish updated benchmarking insights on a regular basis, alongside ongoing investments in Microsoft Defender and the ICES vendor ecosystem.

To explore the latest benchmarking data and learn more about how Defender and ICES partners work together, access the benchmarking site.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.


1Clarity in complexity: New insights for transparent email security, Microsoft. December 10, 2025.

Secure agentic AI for your Frontier Transformation
http://approjects.co.za/?big=en-us/security/blog/2026/03/09/secure-agentic-ai-for-your-frontier-transformation/
Mon, 09 Mar 2026 13:00:00 +0000

We are announcing the next step to make Frontier Transformation real for customers across every industry with Wave 3 of Microsoft 365 Copilot, Microsoft Agent 365, and Microsoft 365 E7: The Frontier Suite.

Today we shared the next step to make Frontier Transformation real for customers across every industry with Wave 3 of Microsoft 365 Copilot, Microsoft Agent 365, and Microsoft 365 E7: The Frontier Suite.

As our customers rapidly embrace agentic AI, chief information officers (CIOs), chief information security officers (CISOs), and security decision makers are asking urgent questions: How do I track and monitor all these agents? How do I know what they are doing? Do they have the right access? Can they leak sensitive data? Are they protected from cyberthreats? How do I govern them?

Agent 365 and Microsoft 365 E7: The Frontier Suite, generally available on May 1, 2026, are designed to help answer these questions and give organizations the confidence to go further with AI.

Agent 365—the control plane for agents

As organizations adopt agentic AI, growing visibility and security gaps can increase the risk of agents becoming double agents. Without a unified control plane, IT, security, and business teams lack visibility into which agents exist, how they behave, who has access to them, and what potential security risks exist across the enterprise. With Microsoft Agent 365 you now have a unified control plane for agents that enables IT, security, and business teams to work together to observe, govern, and secure agents across your organization—including agents built with Microsoft AI platforms and agents from our ecosystem partners—using new Microsoft Security capabilities built into their existing flow of work.

Here is what that looks like in practice:

As we are now running Agent 365 in production, Avanade has real visibility into agent activity, the ability to govern agent sprawl, control resource usage, and manage agents as identity-aware digital entities in Microsoft Entra. This significantly reduces operational and security risk, represents a critical step forward in operationalizing the agent lifecycle at scale, and underscores Microsoft’s commitment to responsible, production-ready AI.

—Aaron Reich, Chief Technology and Information Officer, Avanade

Key Agent 365 capabilities include:

Observability for every role

With Agent 365, IT, security, and business teams gain visibility into all Agent 365 managed agents in their environment, understand how they are used, and can act quickly on performance, behavior, and risk signals relevant to their role—from within existing tools and workflows.

  • Agent Registry provides an inventory of agents in your organization, including agents built with Microsoft AI platforms, ecosystem partner agents, and agents registered through APIs. This agent inventory is available to IT teams in the Microsoft 365 admin center. Security teams see the same unified agent inventory in their existing Microsoft Defender and Purview workflows.
  • Agent behavior and performance observability provides detailed reports about agent performance, adoption and usage metrics, an agent map, and activity details.
  • Agent risk signals across Microsoft Defender*, Entra, and Purview* help security teams evaluate agent risk—just like they do for users—and block agent actions based on agent compromise, sign-in anomalies, and risky data interactions. Defender assesses risk of agent compromise, Entra evaluates identity risk, and Purview evaluates insider risk. IT also has visibility into these risks in the Microsoft 365 admin center.
  • Security policy templates, starting with Microsoft Entra, automate collaboration between IT and security. They enable security teams to define tenant-wide security policies that IT leaders can then enforce in the Microsoft 365 admin center as they onboard new agents.

*These capabilities are in public preview and will continue to be in public preview on May 1.

Secure and govern agent access

Unmanaged agents may create significant risk, from accessing resources unchecked to accumulating excessive privileges and being misused by malicious actors. With Microsoft Entra capabilities included in Agent 365, you can secure agent identities and their access to resources.

  • Agent ID gives each agent a unique identity in Microsoft Entra, designed specifically for the needs of agents. With Agent ID, organizations can apply trusted access policies at scale, reduce gaps from unmanaged identities, and keep agent access aligned to existing organizational controls.
  • Identity Protection and Conditional Access for agents extend existing user policies that make real-time access decisions based on risks, device compliance from Microsoft Intune, and custom security attributes to agents working on behalf of a user. These policies help prevent compromise and help ensure that agents cannot be misused by malicious actors.
  • Identity Governance for agents enables identity leaders to limit agent access to only the resources they need, with access packages that can be scoped to a subset of the user’s permissions, and includes the ability to audit access granted to agents.
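Scoping an agent's access package to a subset of the owning user's permissions is, conceptually, a set-containment check. A minimal sketch under that assumption; the permission names and function are illustrative, not the Entra API:

```python
# Hypothetical permission sets for a user and an agent acting on their behalf.
user_permissions = {"Mail.Read", "Files.Read", "Calendars.Read", "Files.ReadWrite"}
agent_access_package = {"Mail.Read", "Files.Read"}

def package_is_least_privilege(agent_perms: set, user_perms: set) -> bool:
    """An agent's access package must never exceed the user's own permissions
    (i.e., it must be a subset of what the user can already do)."""
    return agent_perms <= user_perms

print(package_is_least_privilege(agent_access_package, user_permissions))        # True
print(package_is_least_privilege({"Directory.ReadWrite.All"}, user_permissions)) # False
```

Framing the check as subset containment also makes privilege creep auditable: any permission in `agent_perms - user_perms` is an immediate finding.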

Prevent data oversharing and ensure agent compliance

Microsoft Purview capabilities in Agent 365 provide comprehensive data security and compliance coverage for agents. You can protect agents from accessing sensitive data, prevent data leaks from risky insiders, and help ensure agents process data responsibly to support compliance with global regulations.

  • Data Security Posture Management provides visibility and insights into data risks for agents so data security admins can proactively mitigate those risks.
  • Information Protection helps ensure that agents inherit and honor Microsoft 365 data sensitivity labels so that they follow the same rules as users for handling sensitive data to prevent agent-led sensitive data leaks.
  • Inline Data Loss Prevention (DLP) for prompts to Microsoft Copilot Studio agents blocks sensitive information such as personally identifiable information, credit card numbers, and custom sensitive information types (SITs) from being processed in the runtime.
  • Insider Risk Management extends insider risk protection to agents to help ensure that risky agent interactions with sensitive data are blocked and flagged to data security admins.
  • Data Lifecycle Management enables data retention and deletion policies for prompts and agent-generated data so you can manage risk and liability by keeping the data that you need and deleting what you don’t.  
  • Audit and eDiscovery extend core compliance and records management capabilities to agents, treating AI agents as auditable entities alongside users and applications. This will help ensure that organizations can audit, investigate, and defensibly manage AI agent activity across the enterprise.
  • Communication Compliance extends to agent interactions to detect and enable human oversight of risky AI communications. This enables business leaders to extend their code of conduct and data compliance policies to AI communications.
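At a high level, inline DLP for a sensitive information type like credit card numbers combines a pattern match with a validation pass over the text being processed. The sketch below is illustrative of that general technique, not Purview's actual detection engine; the regex and the standard Luhn checksum are the assumptions:

```python
import re

def luhn_valid(number: str) -> bool:
    """Luhn checksum used to validate candidate credit card numbers."""
    digits = [int(d) for d in number][::-1]
    total = sum(digits[0::2]) + sum(sum(divmod(2 * d, 10)) for d in digits[1::2])
    return total % 10 == 0

# Candidate runs of 13-16 digits, optionally separated by spaces or hyphens.
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def contains_card_number(text: str) -> bool:
    """Flag text containing a plausible, Luhn-valid card number."""
    for match in CARD_PATTERN.finditer(text):
        candidate = re.sub(r"[ -]", "", match.group())
        if 13 <= len(candidate) <= 16 and luhn_valid(candidate):
            return True
    return False

print(contains_card_number("card: 4111 1111 1111 1111"))  # True (Visa test number)
print(contains_card_number("order id 1234-5678"))         # False
```

The validation step is what keeps false positives down: plenty of 16-digit strings appear in ordinary business text, but few of them pass the checksum, which is why real SIT definitions pair patterns with supporting evidence rather than matching on format alone.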

Defend agents against emerging cyberthreats

To help you stay ahead of emerging cyberthreats, Agent 365 includes Microsoft Defender protections purpose-built to detect and mitigate specific AI vulnerabilities and threats such as prompt manipulation, model tampering, and agent-based attack chains.

  • Security posture management for Microsoft Foundry and Copilot Studio agents* detects misconfigurations and vulnerabilities in agents so security leaders can stay ahead of malicious actors by proactively resolving them before they become an attack vector.
  • Detection, investigation, and response for Foundry and Copilot Studio agents* enables the investigation and remediation of attacks that target agents and helps ensure that agents are accounted for in security investigations.
  • Runtime threat protection, investigation, and hunting** for agents that use the Agent 365 tools gateway, helps organizations detect, block, and investigate malicious agent activities.

Agent 365 will be generally available on May 1, 2026, and priced at $15 per user per month. Learn more about Agent 365.

*These capabilities are in public preview and will remain in public preview on May 1.

**This new capability will enter public preview in April 2026 and will remain in public preview on May 1.

Microsoft 365 E7: The Frontier Suite

Microsoft 365 E7 brings together intelligence and trust to enable organizations to accelerate Frontier Transformation, equipping employees with AI across email, documents, meetings, spreadsheets, and business application surfaces. It also gives IT and security leaders the observability and governance needed to operate AI at enterprise scale.

Microsoft 365 E7 includes Microsoft 365 Copilot, Agent 365, Microsoft Entra Suite, and Microsoft 365 E5 with advanced Defender, Entra, Intune, and Purview security capabilities to help secure users, delivering comprehensive protection across users and agents. It will be available for purchase on May 1, 2026, at a retail price of $99 per user per month. Learn more about Microsoft 365 E7.

End-to-end security for the agentic era

Frontier Transformation is anchored in intelligence and trust, and trust starts with security. Microsoft Security capabilities help protect 1.6 million customers at the speed and scale of AI.1 With Agent 365, we are extending these enterprise-grade capabilities so organizations can observe, secure, and govern agents, and delivering comprehensive protection across agents and users with Microsoft 365 E7.

Secure your Frontier Transformation today with Agent 365 and Microsoft 365 E7: The Frontier Suite. And join us at RSAC Conference 2026 to learn more about these new solutions and hear from industry experts and customers who are shaping how agents can be observed, governed, secured, and trusted in the real world.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.


1Microsoft Fiscal Year 2026 Second Quarter Earnings Conference Call.

The post Secure agentic AI for your Frontier Transformation appeared first on Microsoft Security Blog.

]]>
Women’s History Month: Encouraging women in cybersecurity at every career stage http://approjects.co.za/?big=en-us/security/blog/2026/03/05/womens-history-month-encouraging-women-in-cybersecurity-at-every-career-stage/ Thu, 05 Mar 2026 17:00:00 +0000 http://approjects.co.za/?big=en-us/security/blog/?p=145412 This Women’s History Month, we explore ways to support the next generation of female defenders at every career stage.

The post Women’s History Month: Encouraging women in cybersecurity at every career stage appeared first on Microsoft Security Blog.

]]>
Women’s History Month—and International Women’s Day on March 8, 2026—always gives me pause for reflection. It’s a moment to reflect on how far we’ve come and to consider who we choose to uplift as we look ahead.

Throughout my career, I’ve been inspired by extraordinary women leaders—trailblazers who broke barriers, opened doors, and reshaped what leadership in technology looks like. But today, I want to shine a light on another group that inspires me just as deeply: women early in their careers—the builders, learners, and question-askers who are defining the future of cybersecurity and developing their skills in the era of AI.

These women are entering the field at a moment of unprecedented complexity. Cyberthreats are accelerating. AI is reshaping how we defend, detect, and respond. And the stakes—for trust, safety, and resilience—have never been higher.

That’s exactly why it has never been more critical to have a wide range of experiences and perspectives in our defender community.

Be Cybersmart

Help educate everyone in your organization with cybersecurity awareness resources and training curated by the security experts at Microsoft.

Get the Be Cybersmart Kit.

Why diversity of perspectives is not optional in cybersecurity

Cybersecurity is fundamentally about understanding people—how they behave, how they make decisions, how systems can be misused, and where harm can occur. That’s why diversity of perspectives, backgrounds, experiences, and people is a security imperative.

The ISACA paper titled “The Value of Diversity and Inclusion in Cybersecurity” concludes that cybersecurity teams lacking diversity are at greater risk of engaging in limited threat modeling, exhibiting reduced innovation, and making less robust decisions in complex security environments. At Microsoft Security, we recognize that the cyberthreats we encounter are as varied and multifaceted as humanity itself.

To stay ahead, our teams must reflect that diversity across gender, background, culture, discipline, and lived experience.

When teams bring different perspectives to the table,

  • They ask better questions;
  • They surface risks earlier;
  • They design systems that work for more people;
  • And they build security that is resilient by design.

The power of women early in career and beyond

Women early in their career bring something incredibly powerful to cybersecurity and AI: fresh perspective paired with fearless curiosity. Women bring empathy, clarity, systems thinking, and collaborative leadership that directly strengthen our ability to detect cyberthreats, understand human behavior, and build secure products that work for everyone.

This makes me think of my valued friend and colleague, Lauren Buitta, who is the founder and chief executive officer (CEO) of Girl Security. Lauren has been a tireless advocate for providing women early in career, especially those from underrepresented backgrounds, with the skills and confidence needed to enter security careers. She often says, “Security isn’t just a discipline—it’s empowerment through knowledge.” That philosophy extends to Girl Security’s work preparing the next generation to navigate and lead in an AI-powered world. Her efforts show us that nurturing curiosity early on can have lasting effects throughout life.

They challenge assumptions that may no longer hold. They ask “why” before accepting “how.” They’re often the first to notice gaps—in data, in design, in who is represented and who is missing. Supporting women at this stage isn’t just about equity. It’s about strengthening the future of security itself. These actions build a stronger, more resilient security ecosystem.

Building and cultivating pathways for the next generation

Investing in women early in their cybersecurity and AI security careers is essential. Early access to education, opportunity, and confidence-building experiences helps more women see themselves in this field—and choose to stay.

But if we stop there, we shouldn’t be surprised when the numbers don’t move. In fact, independent global analyses from the Global Cybersecurity Forum and Boston Consulting Group show that women represent just 24% of the cybersecurity workforce worldwide—a figure reinforced by LinkedIn’s real-time labor market data. What I’ve realized is this: To change outcomes, we have to cultivate women throughout their careers—from first exposure to technical mastery, from early roles to leadership, and from individual contributor to decision-maker. Otherwise, we’ll continue to bring women into the field without creating the conditions that allow them to grow, advance, and remain.

That means pairing early career investment with sustained support, inclusive cultures, and everyday actions that reinforce belonging and opportunity over time.

Here are meaningful steps we can all take—not just to widen the pipeline, but to strengthen it end to end:

1. Share stories from a diverse set of role models at every career stage.
Representation fuels imagination. When women early in career see themselves reflected in cybersecurity, they’re more likely to enter the field. When women midcareer and in senior roles see paths forward, they’re more likely to stay and lead.

2. Reevaluate job descriptions at entry and beyond.
Rigid expectations or narrow definitions of technical expertise discourage qualified candidates from applying, and can also limit progression into advanced or leadership roles.

3. Invest in inclusive training and early career programs and sustain learning over time.
Accessible, hands-on learning builds confidence early. Continued upskilling, reskilling, and leadership development ensure women can evolve alongside rapidly changing security and AI technologies.

4. Volunteer with organizations driving cybersecurity and AI education.
Groups like Girl Security and Women in CyberSecurity (WiCyS) are changing outcomes for thousands of girls and women. Your time, mentorship, or sponsorship helps build momentum early—and reinforces pathways later. I welcome you to join Nicole Ford, Vice President Customer Security Officer at Microsoft, who will be hosting a leadership lunch at the WiCyS conference to discuss cultivating leaders for the future through advocacy and sponsorship.

5. Partner with community groups offering mentorship and sponsorship opportunities.
Mentorship is one of the strongest predictors of early career success. Sponsorship—advocacy that opens doors to stretch roles, visibility, and advancement—is critical for long-term progression.

6. Be an ally every day across the full career journey.
Introduce emerging talent to your networks. Encourage them to speak up. Create space for them to lead. Advocate for their ideas in rooms they aren’t in yet—especially as stakes and visibility increase.

Our commitment—and our opportunity

At Microsoft, our mission is to empower every person and every organization on the planet to achieve more. That starts by ensuring the next generation of cybersecurity and AI security professionals has equitable access to opportunity, education, and belonging.

This Women’s History Month, let’s celebrate not only the women who have led the way—but the women who are just getting started.

They’re actively shaping security today, not just influencing its future. Security is a team sport, and we need everyone on the team, because together we can build a safer, more inclusive digital future for all.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.

The post Women’s History Month: Encouraging women in cybersecurity at every career stage appeared first on Microsoft Security Blog.

]]>
Threat modeling AI applications http://approjects.co.za/?big=en-us/security/blog/2026/02/26/threat-modeling-ai-applications/ Thu, 26 Feb 2026 17:04:08 +0000 AI threat modeling helps teams identify misuse, emergent risk, and failure modes in probabilistic and agentic AI systems.

The post Threat modeling AI applications appeared first on Microsoft Security Blog.

]]>
Proactively identifying, assessing, and addressing risk in AI systems

We cannot anticipate every misuse or emergent behavior in AI systems. We can, however, identify what can go wrong, assess how bad it could be, and design systems that help reduce the likelihood or impact of those failure modes. That is the role of threat modeling: a structured way to identify, analyze, and prioritize risks early so teams can prepare for and limit the impact of real‑world failures or adversarial exploits.

Traditional threat modeling evolved around deterministic software: known code paths, predictable inputs and outputs, and relatively stable failure modes. AI systems (especially generative and agentic systems) break many of those assumptions. As a result, threat modeling must be adapted to a fundamentally different risk profile.

Why AI changes threat modeling

Generative AI systems are probabilistic and operate over a highly complex input space. The same input can produce different outputs across executions, and meaning can vary widely based on language, context, and culture. As a result, AI systems require reasoning about ranges of likely behavior, including rare but high‑impact outcomes, rather than a single predictable execution path.

This complexity is amplified by uneven input coverage and resourcing. Models perform differently across languages, dialects, cultural contexts, and modalities, particularly in low‑resourced settings. These gaps make behavior harder to predict and test, and they matter even in the absence of malicious intent. For threat modeling teams, this means reasoning not only about adversarial inputs, but also about where limitations in training data or understanding may surface failures unexpectedly.

Against this backdrop, AI introduces a fundamental shift in how inputs influence system behavior. Traditional software treats untrusted input as data. AI systems treat conversation and instruction as part of a single input stream, where text—including adversarial text—can be interpreted as executable intent. This behavior extends beyond text: multimodal models jointly interpret images and audio as inputs that can influence intent and outcomes.

As AI systems act on this interpreted intent, external inputs can directly influence model behavior, tool use, and downstream actions. This creates new attack surfaces that do not map cleanly to classic threat models, reshaping the AI risk landscape.

Three characteristics drive this shift:

  • Nondeterminism: AI systems require reasoning about ranges of behavior rather than single outcomes, including rare but severe failures.
  • Instruction‑following bias: Models are optimized to be helpful and compliant, making prompt injection, coercion, and manipulation easier when data and instructions are blended by default.
  • System expansion through tools and memory: Agentic systems can invoke APIs, persist state, and trigger workflows autonomously, allowing failures to compound rapidly across components.

Together, these factors introduce familiar risks in unfamiliar forms: prompt injection and indirect prompt injection via external data, misuse of tools, privilege escalation through chaining, silent data exfiltration, and confidently wrong outputs treated as fact.

AI systems also surface human‑centered risks that traditional threat models often overlook, including erosion of trust, overreliance on incorrect outputs, reinforcement of bias, and harm caused by persuasive but wrong responses. Effective AI threat modeling must treat these risks as first‑class concerns, alongside technical and security failures.

Differences in Threat Modeling: Traditional vs. AI Systems

  • Types of threats: Traditional systems focus on preventing data breaches, malware, and unauthorized access. AI systems include those risks plus AI-specific ones such as adversarial attacks, model theft, and data poisoning.
  • Data sensitivity: Traditional systems focus on protecting data in storage and transit (confidentiality, integrity). AI systems must also protect data quality and integrity, since flawed data can impact AI decisions.
  • System behavior: Traditional systems are deterministic, following set rules and logic. AI systems adapt and evolve as they learn from data, making them less predictable.
  • Risks of harmful outputs: Traditional risks are limited to system downtime, unauthorized access, or data corruption. AI can generate harmful content, like biased outputs, misinformation, or even offensive language.
  • Attack surfaces: Traditional threat modeling focuses on software, network, and hardware vulnerabilities. AI expands the attack surface to include the models themselves, with risks of adversarial inputs, model inversion, and tampering.
  • Mitigation strategies: Traditional systems use encryption, patching, and secure coding practices. AI requires those methods plus new techniques like adversarial testing, bias detection, and continuous validation.
  • Transparency and explainability: In traditional systems, logs, audits, and monitoring provide transparency for decisions. AI often functions like a “black box,” so explainability tools are needed to understand and trust AI decisions.
  • Safety and ethics: Traditional safety concerns are generally limited to system failures or outages. AI raises ethical concerns including harmful outputs, safety risks (for example, self-driving cars), and fairness in AI decisions.

Start with assets, not attacks

Effective threat modeling begins by being explicit about what you are protecting. In AI systems, assets extend well beyond databases and credentials.

Common assets include:

  • User safety, especially when systems generate guidance that may influence actions.
  • User trust in system outputs and behavior.
  • Privacy and security of sensitive user and business data.
  • Integrity of instructions, prompts, and contextual data.
  • Integrity of agent actions and downstream effects.

Teams often under-protect abstract assets like trust or correctness, even though failures here cause the most lasting damage. Being explicit about assets also forces hard questions: What actions should this system never take? Some risks are unacceptable regardless of potential benefit, and threat modeling should surface those boundaries early.

Understand the system you’re actually building

Threat modeling only works when grounded in the system as it truly operates, not the simplified version of design docs.

For AI systems, this means understanding:

  • How users actually interact with the system.
  • How prompts, memory, and context are assembled and transformed.
  • Which external data sources are ingested, and under what trust assumptions.
  • What tools or APIs the system can invoke.
  • Whether actions are reactive or autonomous.
  • Where human approval is required and how it is enforced.

In AI systems, the prompt assembly pipeline is a first-class security boundary. Context retrieval, transformation, persistence, and reuse are where trust assumptions quietly accumulate. Many teams find that AI systems are more likely to fail in the gaps between components — where intent and control are implicit rather than enforced — than at their most obvious boundaries.
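One way to make those trust assumptions explicit is to label every segment of the assembled context by origin. The sketch below is illustrative only: the function names (`wrap_untrusted`, `assemble_prompt`) and the delimiter format are hypothetical, not a prescribed scheme, but they show the principle of keeping system instructions and untrusted retrieval visibly separated in the pipeline.

```python
# Hypothetical sketch: assemble a prompt so untrusted content is always
# explicitly delimited, never concatenated as bare instructions.

UNTRUSTED_OPEN = "<untrusted source={source!r}>"
UNTRUSTED_CLOSE = "</untrusted>"

def wrap_untrusted(text: str, source: str) -> str:
    """Mark external content so the trust boundary is visible downstream;
    escape delimiter lookalikes so the payload cannot forge a boundary."""
    sanitized = (text.replace("<untrusted", "&lt;untrusted")
                     .replace("</untrusted>", "&lt;/untrusted&gt;"))
    return UNTRUSTED_OPEN.format(source=source) + sanitized + UNTRUSTED_CLOSE

def assemble_prompt(system_instructions: str, user_query: str,
                    retrieved_docs: list[tuple[str, str]]) -> str:
    """Keep system instructions first and labeled untrusted retrieval last,
    so every place external data enters the context can be audited."""
    parts = [f"[system]\n{system_instructions}", f"[user]\n{user_query}"]
    for source, doc in retrieved_docs:
        parts.append("[retrieved]\n" + wrap_untrusted(doc, source))
    return "\n\n".join(parts)
```

A real pipeline would pair this labeling with model-side handling (for example, instructing the model never to follow directives inside untrusted segments), but even the labeling alone makes context assembly reviewable.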

Model misuse and accidents 

AI systems are attractive targets because they are flexible and easy to abuse. Threat modeling has always focused on motivated adversaries:

  • Who is the adversary?
  • What are they trying to achieve?
  • How could the system help them (intentionally or not)?

Examples include extracting sensitive data through crafted prompts, coercing agents into misusing tools, triggering high-impact actions via indirect inputs, or manipulating outputs to mislead downstream users.

With AI systems, threat modeling must also account for accidental misuse—failures that emerge without malicious intent but still cause real harm. Common patterns include:

  • Overestimation of Intelligence: Users may assume AI systems are more capable, accurate, or reliable than they are, treating outputs as expert judgment rather than probabilistic responses.
  • Unintended Use: Users may apply AI outputs outside the context they were designed for, or assume safeguards exist where they do not.
  • Overreliance: When users accept incorrect or incomplete AI outputs, typically because AI system design makes it difficult to spot errors.

Every boundary where external data can influence prompts, memory, or actions should be treated as high-risk by default. If a feature cannot be defended without unacceptable stakeholder harm, that is a signal to rethink the feature, not to accept the risk by default.

Use impact to determine priority, and likelihood to shape response

Not all failures are equal. Some are rare but catastrophic; others are frequent but contained. For AI systems operating at a massive scale, even low‑likelihood events can surface in real deployments.

Historically, risk management has multiplied impact by likelihood to prioritize risks. This doesn’t work for massively scaled systems: a behavior that occurs once in a million interactions may surface thousands of times per day in a global deployment. Multiplying high impact by low likelihood often creates false comfort and pressure to dismiss severe risks as “unlikely.” That is a warning sign to look more closely at the threat, not justification to look away from it.

A more useful framing separates prioritization from response:

  • Impact drives priority: High-severity risks demand attention regardless of frequency.
  • Likelihood shapes response: Rare but severe failures may rely on manual escalation and human review; frequent failures require automated, scalable controls.
Figure 1: Impact, Likelihood, and Mitigation, by Alyssa Ofstein.

Every identified threat needs an explicit response plan. “Low likelihood” is not a stopping point, especially in probabilistic systems where drift and compounding effects are expected.
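The separation above can be captured in a few lines. The sketch below is a minimal illustration of the framing, not a prescribed scale: the priority labels, thresholds, and the `triage` function name are all assumptions made for the example.

```python
# Minimal sketch: impact alone sets priority; likelihood only selects
# the response style. Labels and categories are illustrative.

def triage(impact: str, likelihood: str) -> dict:
    assert impact in {"low", "medium", "high"}
    assert likelihood in {"rare", "occasional", "frequent"}
    # Impact drives priority, regardless of frequency.
    priority = {"low": "P3", "medium": "P2", "high": "P1"}[impact]
    # Likelihood shapes response: frequent failures need automated,
    # scalable controls; rare-but-severe ones get human review.
    if likelihood == "frequent":
        response = "automated control"
    elif impact == "high":
        response = "manual escalation and human review"
    else:
        response = "monitor and batch-review"
    return {"priority": priority, "response": response}
```

Note that a high-impact, rare failure still lands at the top priority here; only the response mechanism changes with likelihood.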

Design mitigations into the architecture

AI behavior emerges from interactions between models, data, tools, and users. Effective mitigations must be architectural, designed to constrain failure rather than react to it.

Common architectural mitigations include:

  • Clear separation between system instructions and untrusted content.
  • Explicit marking or encoding of untrusted external data.
  • Least-privilege access to tools and actions.
  • Allow lists for retrieval and external calls.
  • Human-in-the-loop approval for high-risk or irreversible actions.
  • Validation and redaction of outputs before data leaves the system.

These controls assume the model may misunderstand intent. Whereas traditional threat modeling assumes that risks can be 100% mitigated, AI threat modeling focuses on limiting blast radius rather than enforcing perfect behavior. Residual risk for AI systems is not a failure of engineering; it is an expected property of non-determinism. Threat modeling helps teams manage that risk deliberately, through defense in depth and layered controls.
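Two of the mitigations above, least-privilege allow lists and human-in-the-loop approval, can be sketched as a single chokepoint that every agent tool call must pass through. The tool names, the `invoke_tool` function, and the approval callback below are hypothetical, chosen only to illustrate the pattern.

```python
# Hypothetical sketch: a least-privilege gateway for agent tool calls.
# Every call passes one chokepoint that enforces an allow list and a
# human-approval gate for high-risk, irreversible actions.

ALLOWED_TOOLS = {"search_docs", "create_ticket", "send_email"}
HIGH_RISK_TOOLS = {"send_email"}  # irreversible or externally visible

def invoke_tool(name: str, args: dict, approve) -> str:
    """approve(name, args) is a callback representing human review;
    it returns True only when a person has signed off on the action."""
    if name not in ALLOWED_TOOLS:
        # Deny by default: anything off the allow list never executes.
        raise PermissionError(f"tool {name!r} is not on the allow list")
    if name in HIGH_RISK_TOOLS and not approve(name, args):
        return "blocked: awaiting human approval"
    return f"executed {name}"
```

The design choice worth noting is deny-by-default: the model never decides what it is allowed to call; the gateway does, so a coerced or confused model can only misuse capabilities it was explicitly granted.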

Detection, observability, and response

Threat modeling does not end at prevention. In complex AI systems, some failures are inevitable, and visibility often determines whether incidents are contained or systemic.

Strong observability enables:

  • Detection of misuse or anomalous behavior.
  • Attribution to specific inputs, agents, tools, or data sources.
  • Accountability through traceable, reviewable actions.
  • Learning from real-world behavior rather than assumptions.

In practice, systems need logging of prompts and context, clear attribution of actions, signals when untrusted data influences outputs, and audit trails that support forensic analysis. This observability turns AI behavior from something teams hope is safe into something they can verify, debug, and improve over time.
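A minimal shape for that audit trail is sketched below. The `AuditLog` class and its fields are assumptions made for illustration; a production system would add tamper resistance, retention policies, and redaction of sensitive prompt content.

```python
# Illustrative sketch: record every model interaction with its inputs,
# the untrusted sources that influenced the context, and the resulting
# action, so incidents can be attributed and investigated.

import json
import time

class AuditLog:
    def __init__(self):
        self.entries = []

    def record(self, prompt: str, untrusted_sources: list[str], action: str):
        self.entries.append({
            "ts": time.time(),
            "prompt": prompt,
            "untrusted_sources": untrusted_sources,
            "action": action,
        })

    def influenced_by(self, source: str) -> list[str]:
        """Attribution query: which actions did a given untrusted
        source influence? Useful for forensic analysis."""
        return [e["action"] for e in self.entries
                if source in e["untrusted_sources"]]

    def export(self) -> str:
        # Serialized trail for review or downstream SIEM ingestion.
        return json.dumps(self.entries)
```

The attribution query is the point of the exercise: when an incident surfaces, the team can ask "what did this data source touch?" instead of reconstructing behavior from assumptions.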

Response mechanisms build on this foundation. Some classes of abuse or failure can be handled automatically, such as rate limiting, access revocation, or feature disablement. Others require human judgment, particularly when user impact or safety is involved. What matters most is that response paths are designed intentionally, not improvised under pressure.

Threat modeling as an ongoing discipline

AI threat modeling is not a specialized activity reserved for security teams. It is a shared responsibility across engineering, product, and design.

The most resilient systems are built by teams that treat threat modeling as one part of a continuous design discipline — shaping architecture, constraining ambition, and keeping human impact in view. As AI systems become more autonomous and embedded in real workflows, the cost of getting this wrong increases.

Get started with AI threat modeling by doing three things:

  1. Map where untrusted data enters your system.
  2. Set clear “never do” boundaries.
  3. Design detection and response for failures at scale.

As AI systems and cyberthreats evolve, these practices should be revisited regularly, not treated as a one-time exercise. Thoughtful threat modeling, applied early and revisited often, remains an important tool for building AI systems that earn and maintain trust over time.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.

The post Threat modeling AI applications appeared first on Microsoft Security Blog.

]]>
Scaling security operations with Microsoft Defender autonomous defense and expert-led services http://approjects.co.za/?big=en-us/security/blog/2026/02/24/scaling-security-operations-with-microsoft-defender-autonomous-defense-and-expert-led-services/ Tue, 24 Feb 2026 13:00:00 +0000 AI-powered cyberattacks outpace aging SOC tools, and this new guide explains why manual defense fails and how autonomous, expert-led security transforms modern protection.

The post Scaling security operations with Microsoft Defender autonomous defense and expert-led services appeared first on Microsoft Security Blog.

]]>
Today’s security leaders are operating in an environment of truncated cyberattack timelines, where aging defenses built for slower, linear cyberthreats can no longer keep pace. AI-powered threat actors now use social engineering and malware that adapt in real time, allowing a single phishing message to escalate into a multidomain compromise within minutes. In many organizations, however, the bigger challenge lies closer to home: Years of accumulated technical debt inside the security operations center (SOC) and best-of-breed security investments have left many teams grappling with stitched-together, siloed tools, each producing fragments of insight that analysts must manually piece together. They’re also struggling to close the skills gap and find the right expertise.

The new e-book, Unlocking Microsoft Defender: A guide to autonomous defense and expert-led security, explores why this model has become unsustainable and how organizations can shift to a more integrated approach to modern defense. Implementing genuine SOC transformation is no easy task, and many organizations seek outside expertise to effect real change. Sign up to download the e-book now and learn more about topics like how autonomous defense paired with human judgment can help organizations tackle today’s toughest cyberthreats, and how adding services from Microsoft Security Experts can help defend against threats, build cyber resilience, and modernize security operations.

WASTED EFFORT: 20% of an analyst’s week—one full workday in five—is lost to manual toil.1

Why autonomous defense is now the standard

To keep pace with this new class of threat actor, security teams need to move beyond incremental automation and fundamentally rethink how defense operates. For years, SOCs have relied on manual triage—analysts chasing large volumes of low-confidence alerts across disconnected tools. Security orchestration, automation, and response (SOAR) platforms improved efficiency by automating known responses, but they remain reactive by design, engaging only after an incident has already taken shape. This model struggles when attacks unfold in minutes, not days.

ALERT OVERLOAD: 42% of alerts go uninvestigated simply due to capacity constraints.1

The next evolution is an agentic SOC—one where defense is driven by continuous signal correlation, automated decision making, and human expertise applied where it matters most. Microsoft Defender XDR provides a unified operational layer across domains, closing visibility gaps created by siloed tools and enabling automated disruption of complex attacks before they escalate. By shifting routine investigation and response to AI-powered agents, security teams can reduce response time, contain cyberthreats earlier, and refocus human effort on proactive hunting, strategic analysis, and resilience rather than constant firefighting.

The blueprint for autonomous defense

The shift toward autonomous defense starts with unifying how security operations work. Fragmented tools force teams to interpret cyberthreats one signal at a time, leaving context scattered and response uneven. The guide explores how coordinated defense brings threat signals and protection actions together, surfacing patterns that individual alerts never show on their own. Instead of adjudicating noise, teams gain clear attack narratives that support faster, more confident decisions.

Autonomous defense builds on that foundation by using AI to act early in the attack lifecycle—not after damage is done. The e-book examines how modern platforms can contain in-progress threats and anticipate attacker movement, reducing reliance on manual escalation and static response models. The result is a SOC that spends less time reacting to incidents and more time shaping security outcomes—an operating model designed for speed, scale, and the inevitability of attack.

See how Microsoft Security Experts uncover fake remote workers

In the e‑book, we explore how autonomous defense is most effective when paired with human judgment and deep experience managing real incidents. Automated protection serves as the foundational security layer, blocking cyberthreats at machine speed, and reducing operational strain. When cyberattacks evolve or escalate, expert‑led hunting and managed detection and response bring global threat intelligence and real‑world insight to contain incidents and strengthen defenses. Human insights feed back into the platform, continuously improving automated protections and sharpening the organization’s overall security posture. In this video, we share a story of how fake profiles and fabricated identities can sometimes appear all too real.

Turn autonomous defense into resilient security

The e-book includes information about how organizations layer expertise at every stage of modern defense—combining autonomous protection with continuous human insight. Microsoft Security Experts helps in three key ways: with technical advisory to help modernize security operations, managed extended detection and response for around-the-clock defense against cyberthreats, and incident response and planning to build cyber resilience. The e-book further explains how this model emphasizes earlier threat discovery, reduced noise, and faster, more confident decision‑making as part of day‑to‑day security operations.

Sign up to download the e-book and read about how intelligence‑led incident response and direct access to security advisors can help organizations build long‑term resilience—not just recover from individual incidents. With expert guidance on readiness, response, and platform optimization, security teams can modernize operations, reduce integration overhead, and measurably improve outcomes. The result is a more resilient security program—one that resolves cyberthreats faster, lowers breach risk, consolidates cost, and enables teams to focus on solving meaningful security problems rather than chasing alerts.

Learn more about the Microsoft Defender Experts Suite

As security teams confront faster, more complex cyberattacks—and persistent gaps in skills and capacity—many are looking for practical ways to strengthen defenses without adding operational strain. The Microsoft Defender Experts Suite provides expert‑led security services to help organizations defend against advanced cyberthreats, improve resilience, and modernize security operations. If you’re exploring how to combine autonomous protection with continuous human expertise, read the full announcement for deeper context on what’s new and how these services work together.

Learn more

Learn more about Microsoft Security Experts and Microsoft Defender XDR.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity. 


1Microsoft and Omdia, State of the SOC: Unify Now or Pay Later report, 2026.

The post Scaling security operations with Microsoft Defender autonomous defense and expert-led services appeared first on Microsoft Security Blog.

]]>
New e-book: Establishing a proactive defense with Microsoft Security Exposure Management http://approjects.co.za/?big=en-us/security/blog/2026/02/19/new-e-book-establishing-a-proactive-defense-with-microsoft-security-exposure-management/ Thu, 19 Feb 2026 17:00:00 +0000 Read the new maturity-based guide that helps organizations move from fragmented, reactive security practices to a unified exposure management approach that enables proactive defense.

The post New e-book: Establishing a proactive defense with Microsoft Security Exposure Management appeared first on Microsoft Security Blog.

]]>
Effective exposure management begins by illuminating and hardening risks across the entire attack surface. Some of the most meaningful shifts in security happen quietly—when teams take a clear look at their exposure landscape and acknowledge the gap between where they stand today and where they need to be. Today, we’re sharing a new guide designed to support that moment of clarity. It offers a practical, maturity-based path for moving from fragmented visibility and reactive fixes to a more unified, risk-driven approach that strengthens resilience one step at a time. Read “Establishing a proactive defense—A maturity-based guide for adopting a dynamic, risk-based approach to exposure management” to learn more now. 

Five levels of exposure management maturity 

In the guide, you’ll learn how organizations progress through five levels of exposure management maturity to strengthen how they identify, prioritize, and act on risk. Early-stage teams operate reactively with limited visibility and compliance-driven fixes. As capabilities mature, processes become consistent, prioritization incorporates business context, and decisions shift from reactive to proactive. This progression reflects a move away from isolated security actions toward repeatable, measurable practices that scale with organizational complexity. At higher maturity, organizations validate controls, consolidate asset and risk data into a single source of truth, and confirm that mitigations work. Rather than assuming security improvements are effective, teams test and verify outcomes to ensure effort translates into real risk reduction. At the most advanced stage, exposure management is fully aligned to business objectives, supported by clear risk metrics, and used to guide remediation, resource allocation, and strategic outcomes.

The maturity model helps security leaders assess where their organization stands and identify practical next steps toward a full-fledged exposure management program. Each level in the guide includes details on the realities organizations face, the key characteristics of that maturity level, common pain points, and suggestions for moving up in maturity. Importantly, the model emphasizes that maturity is not static or final. The last stage of the maturity model, level five, isn’t a finish line—it’s the point where exposure management becomes a continuously evolving capability, fueled by real-time telemetry and adaptive risk modeling. At this stage, exposure management shifts from a program to a strategic discipline—one that informs long-term resilience decisions rather than discrete remediation cycles. 

The path to proactive defense  

Organizations build a unified path to proactive defense when they move beyond fragmented tools and adopt an integrated exposure management approach. By bringing assets, identities, cloud posture, and attack paths into one coherent view, security teams gain the clarity needed to focus effort where it matters most. This alignment enables more consistent action, stronger prioritization, and security decisions that reflect real business risk instead of isolated signals. It also helps teams move from chasing individual findings to managing exposure systematically, with shared context across security, IT, and risk stakeholders. Over time, this shift turns exposure management into a repeatable operating model rather than a collection of disconnected responses. 

Take the next step toward proactive defense 

Designed to help security leaders translate strategy into practical next steps, regardless of where they are starting, the maturity levels outlined in the e-book support organizations as they shift from reacting to cyberthreats to proactively reducing risk and strengthening security across every layer of the environment. To go deeper into the practices, maturity levels, and actions that matter most, read the new e-book: Establishing a proactive defense—A maturity-based guide for adopting a dynamic, risk-based approach to exposure management. 

Join us at RSAC™ 2026

RSAC™ 2026 is more than a conference. It’s a chance to shape the future of security. By engaging with Microsoft Security, you’ll gain:  

  • Actionable insights from industry leaders and researchers.  
  • Hands-on experience with cutting-edge security tools.  
  • Connections that help you navigate the evolving cyberthreat landscape.  

Together, we can make the world safer for all. Join us in San Francisco March 22–26, 2026, and be part of the conversation that defines the next era of cybersecurity.  

Learn more

Learn more about Microsoft Security Exposure Management.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity. 

The post New e-book: Establishing a proactive defense with Microsoft Security Exposure Management appeared first on Microsoft Security Blog.

]]>
Unify now or pay later: New research exposes the operational cost of a fragmented SOC http://approjects.co.za/?big=en-us/security/blog/2026/02/17/unify-now-or-pay-later-new-research-exposes-the-operational-cost-of-a-fragmented-soc/ Tue, 17 Feb 2026 17:00:00 +0000 New research from Microsoft and Omdia reveals how fragmented tools, manual workflows, and alert overload are pushing SOCs to a breaking point.

The post Unify now or pay later: New research exposes the operational cost of a fragmented SOC appeared first on Microsoft Security Blog.

]]>
Security operations are entering a pivotal moment: the operating model that grew around network logs and phishing emails is now buckling under tool sprawl, manual triage, and threat actors that outpace defender capacity. New research from Microsoft and Omdia shows just how heavy the burden can be—security operations centers (SOCs) juggle double-digit consoles, teams manually ingest data several times a week, and nearly half of all alerts go uninvestigated. The result is a growing gap between cyberattacker speed and defender capacity. Read State of the SOC—Unify Now or Pay Later to learn how hidden operational pressures impact resilience—compelling evidence for why unification, automation, and AI-powered workflows are quickly becoming non-negotiables for modern SOC performance.

The forces pushing modern SOC operations to a breaking point

The report surfaces five specific operational pressures shaping the modern SOC—spanning fragmentation, manual toil, signal overload, business-level risk exposure, and detection bias. Separately, each data point is striking. But taken together, they reveal a more consequential reality: analysts spend their time stitching context across consoles and working through endless queues, while real cyberattacks move in parallel. When investigations stall and alerts go untriaged, missed signals don’t just hurt metrics—they create the conditions for preventable compromises. Let’s take a closer look at each of the five issues:

1. Fragmentation

Fragmented tools and disconnected data force analysts to pivot across an average of 10.9 consoles1 and manually reconstruct context, slowing investigations and increasing the likelihood of missed signals. These gaps compound when only about 59% of tools push data to the security information and event management (SIEM) system, leaving most SOCs manually ingesting data and operating with incomplete visibility.

2. Manual toil

Manual, repetitive data work consumes an outsized share of analyst capacity, with 66% of SOCs losing 20% of their week to aggregation and correlation—an operational drain that delays investigations, suppresses threat hunting, and weakens the SOC’s ability to reduce real risk.

3. Security signal overload

Surging alert volumes bury analysts in noise, with an estimated 46% of alerts proving false positives and 42% going uninvestigated—overwhelming capacity, driving fatigue, and increasing the likelihood that real cyberthreats slip through unnoticed.

4. Operational gaps

Operational gaps are directly translating into business-disrupting incidents, with 91% of security leaders reporting serious events and more than half experiencing five or more in the past year—exposing organizations to financial loss, downtime, and reputational damage.

5. Detection bias

Detection bias keeps SOCs focused on tuning alerts for familiar cyberthreats—52% of positive alerts map to known vulnerabilities—leaving dangerous blind spots for emerging tactics, techniques, and procedures (TTPs). This reactive posture slows proactive threat hunting and weakens readiness for novel attacks even as 75% of security leaders worry the SOC is losing pace with new cyberthreats.

Read the full report for the deeper story, including chief information security officer (CISO)-level takeaways, expanded data, and the complete analysis behind each operational pressure, as well as insights that can help security professionals strengthen their strategy and improve real-world SOC outcomes.

What CISOs can do now to strengthen resilience

Security leaders have a clear path to easing today’s operational strain: unify the environment, automate what slows teams down, and elevate identity and endpoint as a single control plane. The shift is already underway as forward-leaning organizations focus on high-impact wins—automating routine lookups, reducing noise, streamlining triage, and eliminating the fragmentation and manual toil that drain analyst capacity. Identity remains the most critical failure point, and leaders increasingly view unified identity-to-endpoint protection as foundational to reducing exposure and restoring defender agility. And as environments unify, the strength of the underlying graph and data lake becomes essential for connecting signals at scale and accelerating every defender workflow.

As AI matures, leaders are also looking for governable, customizable approaches—not black-box automation. They want AI agents they can shape to their environment, integrate deeply with their SIEM, and extend across cloud, identity, and on-premises signals. This mindset reflects a broader operational shift: modern key performance indicators (KPIs) will improve only when tools, workflows, and investigations are unified, and automation frees analysts for higher-value work.

The report details a roadmap for CISOs that emphasizes unifying signals, embedding AI into core workflows, and strengthening identity as the primary control point for reducing risk. It shows how leaders can turn operational friction into strategic momentum by consolidating tools, automating routine investigation steps, elevating analysts to higher-value work, and preparing their SOCs for a future defined by integrated visibility, adaptive defenses, and AI-assisted decision-making.

Chart your path forward

The pressures facing today’s SOCs are real, but the path forward is increasingly clear. As this report shows, organizations that take these steps aren’t just reducing operational friction—they’re building a stronger foundation for rapid detection, decisive response, and long-term readiness. Read State of the SOC—Unify Now or Pay Later for deeper guidance, expanded findings, and a phased roadmap that can help security professionals chart the next era of their SOC evolution.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.


1The study, commissioned by Microsoft, was conducted by Omdia from June 25, 2025, to July 23, 2025. Survey respondents (N=300) included security professionals responsible for SOC operations at mid-market and enterprise organizations (more than 750 employees) across the United States, United Kingdom, and Australia and New Zealand. All statistics included in this post are from the study.

The post Unify now or pay later: New research exposes the operational cost of a fragmented SOC appeared first on Microsoft Security Blog.

]]>
Your complete guide to Microsoft experiences at RSAC™ 2026 Conference http://approjects.co.za/?big=en-us/security/blog/2026/02/12/your-complete-guide-to-microsoft-experiences-at-rsac-2026-conference/ Thu, 12 Feb 2026 17:00:00 +0000 Microsoft Security returns to RSAC Conference to show how Frontier Firms—organizations that are human-led and agent-operated—can stay ahead.

The post Your complete guide to Microsoft experiences at RSAC™ 2026 Conference appeared first on Microsoft Security Blog.

]]>
The era of AI is reshaping both opportunity and risk faster than any shift security leaders have seen. Every organization is feeling the momentum, and for security teams, the question is no longer if AI will transform their work, but how to stay ahead of what comes next.

At Microsoft, we see this moment giving rise to what we call the Frontier Firm: organizations that are human-led and agent-operated. With more than 80% of leaders already using agents or planning to within the year, we’re entering a world where every person may soon have an entire agentic team at their side1. By 2028, IDC projects 1.3 billion agents in use—a scale that changes everything about how we work and how we secure2.

In the agentic era, security must be ambient and autonomous, just like the AI it protects. This is our vision for security as the core primitive, woven into and around everything we build and throughout everything we do. At RSAC™ 2026 Conference, we’ll share how we are delivering on that vision through our AI-first, end-to-end security platform that helps you protect every layer of the AI stack and secure with agentic AI.

Join us at RSAC Conference 2026—March 22–26 in San Francisco

RSAC 2026 will give you a front‑row seat to how AI is transforming the global threat landscape, and how defenders can stay ahead with:

  • A deeper understanding of how AI is reshaping the global threat landscape
  • Insight into how Microsoft can help you protect every layer of the AI stack and secure with agentic AI
  • Product demos, curated sessions, executive conversations, and live meetings with our experts in the booth

This is your moment to see what’s next and what’s possible as we enter the era of agentic security.

Microsoft at RSAC™ 2026

From Microsoft Pre‑Day to innovation sessions, networking opportunities, and 1:1 meetings, explore experiences designed to help you navigate the age of AI with clarity and impact.

Microsoft Pre-Day: Your first look at what’s next in security

Kick off RSAC 2026 on Sunday, March 22 at the Palace Hotel for Microsoft Pre‑Day, an exclusive experience designed to set the tone for the week ahead.

Hear keynote insights from Vasu Jakkal, CVP of Microsoft Security Business, and other Microsoft security leaders as they explore how AI and agents are reshaping the security landscape.

You’ll discover how Microsoft is advancing agentic defense, informed by more than 100 trillion security signals each day. You’ll learn how solutions like Agent 365 deliver observability at every layer, and how Microsoft’s purpose‑built security capabilities help you secure every layer of the AI stack. You’ll also explore how our expert-led services can help you defend against cyberthreats, build cyber resilience, and transform your security operations.

The experience concludes with opportunities to connect, including a networking reception and an invite-only dinner for CISOs and security executives.

Microsoft Pre‑Day is your chance to hear what is coming next and prepare for the week ahead. Secure your spot today.

Executive events: Exclusive access to insights, strategy, and connections

For CISOs and senior security decision makers, RSAC 2026 offers curated experiences designed to deliver maximum value:

  • CISO Dinner (Sunday, March 22): Join Microsoft Security executives and fellow CISOs for an intimate dinner following Microsoft Pre-Day. Share insights, compare strategies, and build connections that matter.
  • The CISO and CIO Mandate for Securing and Governing AI (Monday, March 23): A session outlining why organizations need integrated AI security and governance to manage new risks and accelerate responsible innovation.
  • Executive Lunch & Learn: AI Agents are here! Are you Ready? (Tuesday, March 24): A panel exploring how observability, governance, and security are essential to safely scaling AI agents and unlocking human potential.
  • The AI Risk Equation: Visibility, Control, and Threat Acceleration (Wednesday, March 25): A deeply interactive discussion on how CISOs address AI proliferation, visibility challenges, and expanding attack surfaces while guiding enterprise risk strategy.
  • Post-Day Forum (Thursday, March 26): Wrap up RSAC with an immersive, half‑day program at the Microsoft Experience Center in Silicon Valley—designed for deeper conversations, direct access to Microsoft’s security and AI experts, and collaborative sessions that go beyond the main‑stage content. Explore securing and managing AI agents, protecting multicloud environments, and deploying agentic AI through interactive discussions. Transportation from the city center will be provided. Space is limited, so register early.

These experiences are designed to help CISOs move beyond theory and into actionable strategies for securing their organizations in an AI-first world.

Keynote and sessions: Insights you can act on

On Monday, March 23, don’t miss the RSAC 2026 keynote featuring Vasu Jakkal, CVP of Microsoft Security. In Ambient and Autonomous Security: Building Trust in the Agentic AI Era (3:55 PM–4:15 PM PDT), learn how ambient, autonomous platforms with deep observability are evolving to address AI-powered threats and build a trusted digital foundation.

Here are two sessions you don’t want to miss:

1. Security, Governance, and Control for Agentic AI 

  • Monday, March 23 | 2:20–3:10 PM. Learn the core principles that keep autonomous agents secure and governed so organizations can innovate with AI without sprawl, misuse, or unintended actions.
    • Speakers: Neta Haiby, Partner, Product Manager and Tina Ying, Director, Product Marketing, Microsoft 

2. Advancing Cyber Defense in the Era of AI Driven Threats 

  • Tuesday, March 24 | 9:40–10:30 AM. Explore how AI elevates threat sophistication and what resilient, intelligence-driven defenses look like in this new era.
    • Speaker: Brad Sarsfield, Senior Director, Microsoft Security, NEXT.ai

Plus, don’t miss our other sessions throughout the week. 

Microsoft Booth #5744: Theater sessions and interactive experiences

Visit the Microsoft booth at Moscone Center for an immersive look at how modern security teams protect AI‑powered environments. Connect with Microsoft experts, explore security and governance capabilities built for agentic AI, and see how solutions work together across identity, data, cloud, and security operations.


Test your skills and compete in security games

At the center of the booth is an interactive single‑player experience that puts you in a high‑stakes security scenario: working with adaptive agents, you’ll triage incidents, optimize conditional access, surface threat intelligence, and keep endpoints secure and compliant before heading to demo stations for deeper exploration.

Quick sessions, big takeaways, plus a custom pet sticker

You can also stop by the booth theater for short, expert‑led sessions highlighting real‑world use cases and practical guidance, giving you a clear view of how to strengthen your security approach across the AI landscape. While you’re there, don’t miss the Security Companion Sticker activation, where you can upload a photo of your pet and receive a custom AI-generated sticker.

Microsoft Security Hub: Your space to connect


Throughout the week, the iconic Palace Hotel will serve as Microsoft’s central gathering place—a welcoming hub where you can step away from the bustle of the conference. It’s a space to recharge and connect with Microsoft security experts and executives, participate in focused thought leadership sessions and roundtable discussions, and take part in networking experiences designed to spark meaningful conversations. Full details on sessions and activities are available on the Microsoft Security Experiences at RSAC™ 2026 page.

Customers can also take advantage of scheduled one-on-one meetings with Microsoft security experts during the week. These meetings offer an opportunity to dig deeper into today’s threat landscape, discuss specific product questions, and explore strategies tailored to your organization. To schedule a one-on-one meeting with Microsoft executives and subject matter experts, speak with your account representative or submit a meeting request form.

Partners: Building security together

Microsoft’s presence at RSAC 2026 isn’t just about our technology. It’s about the ecosystem. Visit the booth and the Security Hub to meet members of the Microsoft Intelligent Security Association (MISA) and explore how our partners extend and enhance Microsoft Security solutions. From integrated threat intelligence to compliance automation, these collaborations help you build a stronger, more resilient security posture.

Special thanks to Ascent Solutions, Avertium, BlueVoyant, CyberProof, Darktrace, and Huntress for sponsoring the Microsoft Security Hub and karaoke party.

Why join us at RSAC?

Attending RSAC™ 2026? By engaging with Microsoft Security, you’ll gain clear perspective on how AI agents are reshaping risk and response, practical guidance to help you focus on what matters most, and meaningful connections with peers and experts facing the same challenges.

Together, we can make the world safer for all. Join us in San Francisco and be part of the conversation defining the next era of cybersecurity.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.


1According to data from the 2025 Work Trend Index, 82% of leaders say this is a pivotal year to rethink key aspects of strategy and operations, and 81% say they expect agents to be moderately or extensively integrated into their company’s AI strategy in the next 12–18 months. At the same time, adoption on the ground is spreading but uneven: 24% of leaders say their companies have already deployed AI organization-wide, while just 12% remain in pilot mode.

2IDC Info Snapshot, sponsored by Microsoft, 1.3 Billion AI Agents by 2028, May 2025 #US53361825

The post Your complete guide to Microsoft experiences at RSAC™ 2026 Conference appeared first on Microsoft Security Blog.

]]>