Microsoft Security Copilot Archives | Microsoft Security Blog http://approjects.co.za/?big=en-us/security/blog/product/microsoft-security-copilot/ Expert coverage of cybersecurity topics Thu, 09 Apr 2026 18:34:05 +0000 en-US hourly 1 https://wordpress.org/?v=6.8.3 The agentic SOC—Rethinking SecOps for the next decade http://approjects.co.za/?big=en-us/security/blog/2026/04/09/the-agentic-soc-rethinking-secops-for-the-next-decade/ Thu, 09 Apr 2026 19:00:00 +0000 http://approjects.co.za/?big=en-us/security/blog/?p=146282 In the SOC of the future, autonomous defense moves at machine speed, agents add context and coordination, and humans focus on judgment, risk, and outcomes.

The post The agentic SOC—Rethinking SecOps for the next decade appeared first on Microsoft Security Blog.

]]>
Every major shift in cyberattacker behavior over the past decade has followed a meaningful shift in how defenders operate. When security operation centers (SOCs) deployed endpoint detection and response (EDR)—and later extended detection and response (XDR)—security teams raised the bar, pushing cyberattackers beyond phishing, commodity malware, and perimeter‑based attacks and into cloud infrastructure built for scale and speed.

That pattern continued as defenders embraced automation and AI to manage expanding digital estates. SOCs were often early scale adopters—using machine learning to reduce noise, improve visibility, and respond faster across growing environments. Cyberattackers became more targeted and multistage, moving deliberately across identities, endpoints, cloud resources, and email, where detection was hardest. Success increasingly depended on moving fast enough to act before analysts could connect the dots. Even with this progress, security operations (SecOps) still feel asymmetrical: threat actors only need to be right once, while defenders are judged by every miss. If defense depends on human intervention to begin, defense will always feel asymmetrical.

To change the outcome, SOCs must change how defense itself works. This is the agentic SOC: where security delivers adaptive, autonomous defense, freeing defenders for strategic, high‑impact work. In this series, we’ll break down what that shift requires, what early experimentation has taught us, and where organizations can start today. Read more about how some organizations moving toward the agentic SOC and access a foundational roadmap for this transformation in our new whitepaper, The agentic SOC: Your teammate for tomorrow, today.

What we mean by “the agentic SOC”

At its core, the agentic SOC is an operating model that shifts security from reacting to incidents to anticipating how cyberattackers move—and actively reshaping the environment to cut off their paths.

It brings together a platform that can increasingly defend itself through built-in autonomous defense, with AI agents working alongside humans to accelerate investigation, prioritization, and action—so teams spend less time on execution and more time on judgment, risk, and the decisions that matter.

How does that change day-to-day work? Imagine a credential theft attempt. Built-in defenses automatically lock the affected account and isolate the compromised device within seconds—before lateral movement can begin. At the same time, an AI agent initiates an investigation, hunting for related activity across identity, endpoint, email, and cloud signals, and correlating everything into a single view.

When an analyst opens their queue, the “noise” of overwhelming alerts is already gone. Evidence has been pre-assembled. Likely next steps are suggested. The analyst can start right away by answering higher impact questions: Is this part of a broader campaign? Should this authentication method be hardened? Are there related techniques this cyberattacker commonly uses that the environment is still exposed to?

In today’s SOC, we see that sequence often takes hours—and the proactive improvement is very limited, if it ever happens; there’s simply not enough time. In an agentic SOC, it happens in minutes, and teams can spend the time they’ve gained on deeper investigation, systemic hardening, and reducing the likelihood of repeat cyberattacks.

A layered model for the agentic SOC

This model works because an agentic SOC is built on two distinct, but interdependent layers. The first is an underlying threat protection platform that has fundamentally evolved how cyberattacks are defended against and disrupted. High confidence cyberthreats are handled automatically through deterministic, policy-bound controls built directly into the platform. Known attack patterns are blocked in real time—without deliberation or creativity—shielding the environment from machine-speed cyberthreats before scarce human attention or token intensive reasoning is required. This disruption layer is not optional; it is the prerequisite that makes an agentic SOC safe, scalable, and sustainable.

The second layer operates at the operational level, where agents take on tough analysis and correlation work to dramatically increase the leverage of security teams and shift focus from uncovering insight to acting on it. These agents reason over evidence, coordinate investigations, orchestrate response across domains, and learn continuously from outcomes. Over time, they help identify recurring attack paths, surface gaps in posture, and recommend changes that make the environment harder to exploit—not just faster to respond.

Together, they transform the SOC from a reactive workflow engine into a resilient system.

What’s real now, and why there’s reason for optimism

The optimism around our view of the agentic SOC comes from operational discipline and proven, real-world impact. Autonomous attack disruption has been operating at scale for years.

Read more about how Microsoft Defender establishes confidence for automatic action.

Attacks like ransomware are disrupted in an average of three minutes, and tens of thousands of attacks are contained every month by isolating compromised users and devices before lateral movement can take hold. This all done with a 99.99% confidence rating, so SOC teams can trust in its efficacy.

Building on that proven foundation, newer capabilities like predictive shielding extend autonomous defense further—anticipating how cyberattacks are likely to progress and proactively restricting high-risk paths or assets during an intrusion.

Read the case study about how predictive shielding in Microsoft Defender stopped Group Policy Object (GPO) ransomware before it started

Together, these system-level protections show that platforms can safely intervene earlier in the cyberattack chain without introducing unnecessary disruption.

Agentic capabilities are also being similarly scoped. Internally, we’ve been testing task agents for triage and investigations under our expert supervision of our defenders. In live environments, these agents automate 75% of phishing and malware investigations. We’ve also tested agents on more complex analytical tasks, such as assessing exposure to specific vulnerabilities—work that once required a full day of engineering effort and can now be completed in less than an hour by an agent.

How day-to-day SOC work will change in the future

In an agentic SOC, the center of gravity will change for roles like an analyst. Fewer analysts are pulled into firefighting; more time is spent investigating how the organization is being targeted and what steps can be taken to reduce exposure. Within this new operating model, security teams will be freed to evolve the team structure and their day-to-day responsibilities.

Agentic systems increase demand for oversight, tuning, and governance. Detection and response engineering becomes more central, as teams design policies, confidence thresholds, and escalation paths. New roles emerge around supervising outcomes and refining system behavior over time.

Expertise becomes more valuable, not less. Judgment, context, and institutional knowledge are no longer consumed by repetitive tasks—they shape how the SOC operates at scale. And skilled practitioners closer to strategy, quality, and accountability.

To make this shift tangible, here’s how key roles are evolving:

  • Analysts: from triaging alerts to supervising outcomes. Analysts validate agent‑led investigations, determine when deeper inquiry is needed, focus on ambiguous cases, and guide system learning over time.
  • Detection engineers: from writing rules to teaching the system what matters. Engineers decide which signals are trustworthy, add the right context, and set confidence thresholds so detections can be acted on automatically—without human review every time.
  • Threat hunters: from manual queries to hypothesis-driven exploration. Hunters use AI to surface anomalies and focus on creative investigation and adversary simulation.
  • SOC leadership: from managing queues to orchestrating autonomy. Leaders define automation policies, oversee governance, and align AI actions with business risk.

Each shift reflects a broader truth: in the agentic SOC, people don’t do less—they do more of what matters.

The agentic SOC journey

This is a significant change in how security teams operate, and it doesn’t happen overnight. Based on our own experience, we’ve outlined a maturity model that shows how organizations can progress toward an agentic SOC over time.

Organizations begin by establishing a trusted foundation that unifies security tooling, enables the deployment of autonomous defense and begins unifying security signal in earnest. From there, they introduce agents to take on bounded, high-volume work under human supervision, learning where automation adds leverage and where judgment still matters most. Over time, as confidence, governance, and operational discipline mature, agents expand from assisting individual workflows to coordinating broader security outcomes. At every stage, progress is measured not by how much work is automated, but by how effectively human expertise is amplified.

A horizontal gradient graphic transitioning from blue to purple shows a three-stage SOC maturity journey connected by a curved line, with labeled milestones reading “SOC I: Unify your platform foundation,” “SOC II: Accelerate operations with generative AI,” and “SOC III: Deploy agentic automation.”

SOC 1—Unify your platform foundation

The shift begins with a unified security platform that enables autonomous defense. Deterministic, policy-bound protections stop high confidence cyberthreats automatically—removing urgency, reducing blast radius, and eliminating the constant context switching that slows human response. By integrating signals across identity, endpoints, and cloud, defenders gain a shared view of cyberattacks instead of stitching evidence together across tools. This foundation is what makes cross-domain action possible—and separates experimental automation from production-ready operations.

SOC 2—Accelerate operations with generative AI and task agents

With urgency reduced, generative AI changes how work flows through the SOC. Instead of pushing alerts forward, AI assembles context, synthesizes signals across domains, and produces coherent investigations. Repetitive, high-volume tasks like triage, correlation, and basic investigation are absorbed by the system, allowing analysts to focus on higher impact decisions. This stage establishes new operational patterns where humans and AI work together—accelerating response while preserving judgment and accountability.

SOC 3—Deploy agentic automation

As trust grows, agents move from assistance to action. Specialized agents autonomously orchestrate specific tasks—containing compromised identities, isolating devices, or remediating reported phishing—while humans shift into supervisory roles. Over time, agents help identify patterns, anticipate attack paths, and optimize defenses across the environment. Security teams spend less time managing queues and more time shaping posture, risk, and outcomes. These shifts compound across all three stages.

What comes next for the SOC evolution?

We believe the strongest agentic SOC models will begin with autonomous defense—deterministic, policy‑bound actions that safely stop what is already known to be dangerous at machine speed. That foundation removes urgency, noise, and latency from security operations.

Additionally, agents and humans work differently. Agents assemble context, coordinate remediation, and optimize how the SOC operates. Humans provide intent, judgment, and accountability—turning time saved into smarter, more strategic security outcomes.

This is the first of a series of posts that will explore what makes the agentic SOC model real: the platform foundations required to defend autonomously, the governance and trust mechanisms that keep autonomy safe, and the adoption journey organizations take to get there. Some organizations are already rebuilding their businesses around AI, a new class of Frontier Firms. Read more about how they’re making their move toward the agentic SOC and access a foundational roadmap for this transformation in our new whitepaper, The agentic SOC: Your teammate for tomorrow, today.

Learn more

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity. 

The post The agentic SOC—Rethinking SecOps for the next decade appeared first on Microsoft Security Blog.

]]>
Identity security is the new pressure point for modern cyberattacks http://approjects.co.za/?big=en-us/security/blog/2026/03/25/identity-security-is-the-new-pressure-point-for-modern-cyberattacks/ Wed, 25 Mar 2026 16:00:00 +0000 http://approjects.co.za/?big=en-us/security/blog/?p=145937 Read the latest Microsoft Secure Access report for insights into why a unified identity and access strategy offers strong modern protection.

The post Identity security is the new pressure point for modern cyberattacks appeared first on Microsoft Security Blog.

]]>
Identity attacks no longer hinge on who a cyberattacker compromises, but on what that identity can access. As organizations manage growing numbers of human, non-human, and agentic identities, their access fabric multiplies across apps, resources, and environments, which increases both operational complexity for identity teams and risk exposure for security teams.

Redefining identity security for the modern enterprise

Read the blog ↗

The challenge isn’t just scale, it’s fragmentation. From our latest Secure Access report, research shows that 32% of organizations say their access management solutions are duplicative, and 40% say they have too many different vendors. That fragmentation for security vendors makes it harder to maintain consistent access controls and correlate risk across identities. When risk is distributed across dozens of disconnected accounts and permissions, visibility fragments and blind spots emerge—creating ideal conditions for cyberattackers to move laterally without detection. Securing identity in this reality requires more than incremental improvements. It calls for a shift from fragmented controls to an integrated, end-to-end approach that treats identity as a shared control plane that is informed by a continuous, foundational security signal.

Why fragmentation fails—and what must replace it

With the traditional model of identity security—built on siloed directories, disconnected access policies, and bolt-on threat detection—cyberattackers don’t have to break defenses, they just move between them. Permissions go uncorrelated, access policies drift as environments evolve, and lateral movement hides in the gaps.

What is a Security Operations Center?

Learn more ↗

For defenders, this creates a dangerous imbalance. Identity signals flood the security operations center (SOC) without the context to act, while identity teams enforce access without visibility into active cyberthreats. Risk accumulates across systems, but responsibility—and insight—remains fragmented.

Fixing this doesn’t require more alerts or point solutions. It requires an integrated fabric that brings together all of the identities, access, and signals.

A modern identity security solution must unify three critical layers:

  • The identity infrastructure: The systems and services that underpin every access decision. This includes the identity provider, authentication services, single sign-on (SSO), user and group management, and the systems that establish and maintain trust across the enterprise. Without this foundation, there is no authoritative source of truth for who an identity is, what it can access, or how it should be governed. It’s the layer many security vendors lack—and the one Microsoft delivers at global scale.
  • The identity control plane: Where privileged identity management and access decisions are enforced in real time, based on dynamic risk signals, behavioral context, and policy intent. This is where identity and security converge to adapt access as conditions change, powering real-time response to identity threats.
  • End-to-end identity threat protection: Before a cyberattack, it proactively reduces posture risk by eliminating excessive access and closing identity exposure gaps. When threats emerge, it detects identity misuse in real time, surfaces lateral movement, and drives rapid containment—connecting integrated signals and response across the full attack lifecycle.

When these layers operate in isolation, risk is missed. When they operate as one, identity becomes a powerful security signal—enabling earlier detection, smarter decisions, and faster response.

Redefining identity security for real-time defense

Microsoft is delivering a new standard for identity security solution—one that unifies identity infrastructure, access control, and threat response into a single, real-time platform built for speed, precision, and autonomy.

We start with the identity infrastructure: the foundational identity layer powered by Microsoft Entra. As one of the most widely adopted identity platforms in the world with billions of authentications managed daily, it provides resilient SSO, user and group management, and trust establishment at global scale—a layer many security vendors simply don’t have access to.

We collapse identity sprawl, correlating related accounts across cloud and on-premises into a single identity view, so risk assessment is no longer scattered across disconnected systems. This gives security teams a real‑time understanding of what an identity and its correlated accounts can access, not just who it is—allowing them to spot dangerous access paths early, limit impact, and disrupt lateral movement before attackers turn access into impact. Likewise, it gives identity teams visibility into whether a user flagged as a high risk was just a one-off or if its associated with other accounts, informing what access decisions to make.

On top of that foundation is a real-time identity control plane designed for how attacks actually unfold. Microsoft Entra Conditional Access continuously evaluates risk as access is used, not just when it’s granted—tracking signals from identity, device, network, and broader threat intelligence throughout the session. As conditions change, access adapts in real time, helping identity teams limit exposure and prevent risky access while giving security teams the ability to interrupt attack paths while activity is still in motion. This is adaptive access driven by connected intelligence—not static policy.

And when risk turns into a threat, we act—automatically and inline, which results in a faster response. Microsoft’s threat protection is differentiated by automatic attack disruption: a capability that intervenes mid-attack to isolate compromised assets by terminating user sessions, revoking access, and applying just-in-time hardening to stop lateral movement and privilege escalation. It’s not just detection—it’s defense in motion.

To accelerate response, we’ve extended Microsoft Security Copilot’s triage agent to identity. It uses AI to filter noise, surface high-confidence alerts, and guide analysts with clear, explainable insights—reducing time to action and analyst fatigue.

This end-to-end approach shifts identity from an expanding source of exposure into a strategic advantage. Instead of reacting after access has already been abused, it helps ensure that risk is evaluated continuously, access decisions are made in real-time, and organizations can defend more effectively as attack paths emerge to stop identity‑based attacks before they escalate into business impact.

Innovation that moves the industry forward

At RSAC 2026, we announced a set of innovations in identity security that are designed to help organizations move from fragmented awareness to confident, identity-centric protection:

  • The new identity security dashboard in Microsoft Defender doesn’t just summarize alerts, it reveals where identity risk actually concentrates across human and nonhuman identities, account types, and providers. Instead of hopping between consoles, teams can immediately see which access paths matter most, where blast radius is largest, and where action will have the greatest impact.
  • A new unified identity risk score correlates together more than 100 trillion signals across Microsoft Security including identity behavior, access risk, and threat signals into a single, actionable view of risk. This allows teams to move directly from understanding exposure to enforcing protection—applying controls at the point of access, natively through risk-based Conditional Access policies.
  • Adaptive risk remediation helps identity and security teams contain modern cyberattacks more efficiently while maintaining strong protection. When risk is detected, users easily regain access and Microsoft Entra ID Protection adapts risk remediation based on the type of cyberthreat and the credentials used. This reduces reliance on help desk processes and lowers manual response effort.
  • Automatic attack disruption fundamentally changes the outcome of identity-based attacks. Instead of detecting suspicious behavior and waiting for the security teams to respond, it intervenes while cyberattacks are in progress—terminating sessions, revoking access, and applying just-in-time hardening to shut down cyberattacker movement before lateral spread or privilege escalation can occur.
  • Security Copilot’s triage agent now extends to identity. Using AI to collapse signal overload into clear, recommended action, the agent surfaces high confidence threats, explaining why they matter, and guides analysts to the right response while attacks are still unfolding. The result is faster containment with far less analyst fatigue.
  • Expanded coverage across the modern identity fabric, including deeper visibility into non-human identities and new integrations with third-party platforms like SailPoint and CyberArk—providing protection that spans the full ecosystem, not just first-party assets.
  • A new coverage and maturity view helps organizations assess their current identity security posture, identify gaps, and prioritize next steps—transforming identity protection from a static checklist into a dynamic, guided journey.

These innovations are deeply integrated, continuously reinforced, and designed to work together—enabling security and identity teams to operate from a shared source of truth, with shared context, and shared urgency. Read more about redefining identity security for the modern enterprise.

They are designed to help organizations shift from reactive identity management to proactive identity defense—and from fragmented tools to a unified platform built for real-time security across human, non-human, and agentic identities.

Learn more

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.

The post Identity security is the new pressure point for modern cyberattacks appeared first on Microsoft Security Blog.

]]>
CTI-REALM: A new benchmark for end-to-end detection rule generation with AI agents http://approjects.co.za/?big=en-us/security/blog/2026/03/20/cti-realm-a-new-benchmark-for-end-to-end-detection-rule-generation-with-ai-agents/ Fri, 20 Mar 2026 16:19:00 +0000 Excerpt: CTI-REALM is Microsoft’s open-source benchmark for evaluating AI agents on real-world detection engineering—turning cyber threat intelligence (CTI) into validated detections.

The post CTI-REALM: A new benchmark for end-to-end detection rule generation with AI agents appeared first on Microsoft Security Blog.

]]>
Excerpt: CTI-REALM is Microsoft’s open-source benchmark for evaluating AI agents on real-world detection engineering—turning cyber threat intelligence (CTI) into validated detections. Instead of measuring “CTI trivia,” CTI-REALM tests end-to-end workflows: reading threat reports, exploring telemetry, iterating on KQL queries, and producing Sigma rules and KQL-based detection logic that can be scored against ground truth across Linux, AKS, and Azure cloud environments.


Security is Microsoft’s top priority. Every day, we process more than 100 trillion security signals across endpoints, cloud infrastructure, identity, and global threat intelligence. That’s the scale modern cyber defense demands, and AI is a core part of how we protect Microsoft and our customers worldwide. At the same time, security is, and always will be, a team sport.

That’s why Microsoft is committed to AI model diversity and to helping defenders apply the latest AI responsibly. We created CTI‑REALM and open‑sourced it so the broader industry can test models, write better code, and build more secure systems together.


CTI-REALM (Cyber Threat Real World Evaluation and LLM Benchmarking) is Microsoft’s open-source benchmark that evaluates AI agents on end-to-end detection engineering. Building on work like ExCyTIn-Bench, which evaluates agents on threat investigation, CTI-REALM extends the scope to the next stage of the security workflow: detection rule generation. Rather than testing whether a model can answer CTI trivia or classify techniques in isolation, CTI-REALM places agents in a realistic, tool-rich environment and asks them to do what security analysts do every day: read a threat intelligence report, explore telemetry, write and refine KQL queries, and produce validated detection rules.

We curated 37 CTI reports from public sources (Microsoft Security, Datadog Security Labs, Palo Alto Networks, and Splunk), selecting those that could be faithfully simulated in a sandboxed environment and that produced telemetry suitable for detection rule development. The benchmark spans three platforms: Linux endpoints, Azure Kubernetes Service (AKS), and Azure cloud infrastructure with ground-truth scoring at every stage of the analytical workflow.

Why CTI-REALM exists

Existing cybersecurity benchmarks primarily test parametric knowledge: can a model name the MITRE technique behind a log entry, or classify a TTP from a report? These are useful signals. However, they miss the harder question: can an agent operationalize that knowledge into detection logic that finds attacks in production telemetry?

No current benchmark evaluates this complete workflow. CTI-REALM fills that gap by measuring:

  • Operationalization, not recall: Agents must translate narrative threat intelligence into working Sigma rules and KQL queries, validated against real attack telemetry.
  • The full workflow: Scoring captures intermediate decision quality—CTI report selection, MITRE technique mapping, data source identification, iterative query refinement. Scoring is not just limited to the final output.
  • Realistic tooling: Agents use the same types of tools security analysts rely on: CTI repositories, schema explorers, a Kusto query engine, MITRE ATT&CK and Sigma rule databases.

Business Impact

CTI-REALM gives security engineering leaders a repeatable, objective way to prove whether an AI model improves detection coverage and analyst output.

Traditional benchmarks tend to provide a single aggregate score where a model either passes or fails but doesn’t always tell the team why. CTI-REALM’s checkpoint-based scoring answers this directly. It reveals whether a model struggles with CTI comprehension, query construction, or detection specificity. This helps teams make informed decisions about where human review and guardrails are needed.

Why CTI-REALM matters for business

  • Measures operationalization, not trivia: Focuses on translating narrative threat intel into detection logic that can be validated against ground truth.
  • Captures the workflow: Evaluates intermediate steps (e.g., technique extraction, telemetry identification, iterative refinement) in addition to the final rule quality.
  • Supports safer adoption: Helps teams benchmark models before considering any downstream use and reinforces the need for human review before operational deployment.

Latest results

We evaluated multiple frontier model configurations on CTI-REALM-50 (50 tasks spanning all three platforms).

We recently evaluated Anthropic’s Claude Mythos Preview (early snapshot) with our open-source benchmark, CTI-REALM. The results show a substantial improvement in performance compared to other evaluated agentic security benchmarks results.

What the numbers tell us

  • Anthropic models lead across the board. Claude occupies the top three positions (0.624–0.685), driven by significantly stronger tool-use and iterative query behavior compared to OpenAI models.
  • More reasoning isn’t always better. Within the GPT-5 family, medium reasoning consistently beats high across all three generations, suggesting overthinking hurts in agentic settings.
  • Cloud detection is the hardest problem. Performance drops sharply from Linux (0.585) to AKS (0.517) to Cloud (0.282), reflecting the difficulty of correlating across multiple data sources in APT-style scenarios.
  • CTI tools matter. Removing CTI-specific tools degraded every model’s output by up to 0.150 points, with the biggest impact on final detection rule quality rather than intermediate steps.
  • Structured guidance closes the gap. Providing a smaller model with human-authored workflow tips closed about a third of the performance gap to a much larger model, primarily by improving threat technique identification.

For complete details around techniques and results, please refer to the paper here: [2603.13517] CTI-REALM: Benchmark to Evaluate Agent Performance on Security Detection Rule Generation Capabilities.

Get involved

CTI-REALM is open-source and free to access. CTI-REALM will be available on the Inspect AI repo soon. You can access it here: UKGovernmentBEIS/inspect_evals: Collection of evals for Inspect AI.

Model developers and security teams are invited to contribute, benchmark, and share results via the official GitHub repository. For questions or partnership opportunities, reach out to the team at msecaimrbenchmarking@microsoft[.]com.

CTI-REALM helps teams evaluate whether an agent can reliably turn threat intelligence into detections before relying on it in security operations.

References

  1. Microsoft raises the bar: A smarter way to measure AI for cybersecurity | Microsoft Security Blog
  2. [2603.13517] CTI-REALM: Benchmark to Evaluate Agent Performance on Security Detection Rule Generation Capabilities
  3. CTI-REALM: Cyber Threat Intelligence Detection Rule Development Benchmark by arjun180-new · Pull Request #1270 · UKGovernmentBEIS/inspect_evals

The post CTI-REALM: A new benchmark for end-to-end detection rule generation with AI agents appeared first on Microsoft Security Blog.

]]>
Secure agentic AI end-to-end http://approjects.co.za/?big=en-us/security/blog/2026/03/20/secure-agentic-ai-end-to-end/ Fri, 20 Mar 2026 16:00:00 +0000 http://approjects.co.za/?big=en-us/security/blog/?p=145742 In this agentic era, security must be woven into, and around, every layer of the AI estate. At RSAC 2026, we are delivering on that vision with new purpose-built capabilities designed to help organizations secure agents, secure their foundations, and defend using agents and experts.

The post Secure agentic AI end-to-end appeared first on Microsoft Security Blog.

]]>
Next week, RSAC™ Conference celebrates its 35-year anniversary as a forum that brings the security community together to address new challenges and embrace opportunities in our quest to make the world a safer place for all. As we look towards that milestone, agentic AI is reshaping industries rapidly as customers transform to become Frontier Firms—those anchored in intelligence and trust and using agents to elevate human ambition, holistically reimagining their business to achieve their highest aspirations. Our recent research shows that 80% of Fortune 500 companies are already using agents.1

At the same time, this innovation is happening against a sea change in AI-powered attacks where agents can become “double agents.” And chief information officers (CIOs), chief information security officers (CISOs), and security decision makers are grappling with the resulting security implications: How do they observe, govern, and secure agents? How do they secure their foundations in this new era? How can they use agentic AI to protect their organization and detect and respond to traditional and emerging threats?

The answer starts with trust, and security has always been the root of trust. In this agentic era, security must be woven into, and around, every layer of the AI estate. It must be ambient and autonomous, just like the AI it protects. This is our vision for security as the core primitive of the AI stack.

At RSAC 2026, we are delivering on that vision with new purpose-built capabilities designed to help organizations secure agents, secure their foundations, and defend using agents and experts. Fueled by more than 100 trillion daily signals, Microsoft Security helps protect 1.6 million customers, one billion identities, and 24 billion Copilot interactions.2 Read on to learn how we can help you secure agentic AI.

Secure agents

Earlier this month, we announced that Agent 365 will be generally available on May 1. Agent 365—the control plane for agents—gives IT, security, and business teams the visibility and tools they need to observe, secure, and govern agents at scale using the infrastructure you already have and trust. It includes new Microsoft Defender, Entra, and Purview capabilities to help you secure agent access, prevent data oversharing, and defend against emerging threats.

Agent 365 is included in Microsoft 365 E7: The Frontier Suite along with Microsoft 365 Copilot, Microsoft Entra Suite, and Microsoft 365 E5, which includes many of the advanced Microsoft Security capabilities below to deliver comprehensive protection for your organization.

Secure your foundations

Along with securing agents, we also need to think of securing AI comprehensively. To truly secure agentic AI, we must secure foundations—the systems that agentic AI is built and runs on and the people who are developing and using AI. At RSAC 2026, we are introducing new capabilities to help you gain visibility into risks across your enterprise, secure identities with continuous adaptive access, safeguard sensitive data across AI workflows, and defend against threats at the speed and scale of AI.

Gain visibility into risks across your enterprise

As AI adoption accelerates, so does the need for comprehensive and continuous visibility into AI risks across your environment—from agents to AI apps and services. We are addressing this challenge with new capabilities that give you insight into risks across your enterprise so you know where AI is showing up, how it is being used, and where your exposure to risk may be growing. New capabilities include:

  • Security Dashboard for AI provides CISOs and security teams with unified visibility into AI-related risk across the organization. Now generally available.
  • Entra Internet Access Shadow AI Detection uses the network layer to identify previously unknown AI applications and surface unmanaged AI usage that might otherwise go undetected. Generally available March 31.
  • Enhanced Intune app inventory provides rich visibility into your app estate installed on devices, including AI-enabled apps, to support targeted remediation of high-risk software. Generally available in May.

Secure identities with continuous, adaptive access

Identity is the foundation of modern security, the most targeted layer in any environment, and the first line of defense. With Microsoft Entra, you can secure access and deliver comprehensive identity security using new capabilities that help you harden your identity infrastructure, improve tenant governance, modernize authentication, and make intelligent access decisions.

  • Entra Backup and Recovery strengthens resilience with an automated backup of Entra directory objects to enable rapid recovery in case of accidental data deletion or unauthorized changes. Now available in preview.
  • Entra Tenant Governance helps organizations discover unmanaged (shadow) Entra tenants and establish consistent tenant policies and governance in multi-tenant environments. Now available in preview.
  • Entra passkey capabilities now include synced passkeys and passkey profiles to enable maximum flexibility for end-users, making it easy to move between devices, while organizations looking for maximum control still have the option of device-bound passkeys. Plus, Entra passkeys are now natively integrated into the Windows Hello experience, making phishing-resistant passkey authentication more seamless on Windows devices. Synced passkeys and passkey profiles are generally available, passkey integration into Windows Hello is in preview. 
  • Entra external Multi-Factor Authentication (MFA) allows organizations to connect external MFA providers directly with Microsoft Entra so they can leverage pre-existing MFA investments or use highly specialized MFA methods. Now generally available.
  • Entra adaptive risk remediation helps users securely regain access without help-desk friction through automatic self-remediation across authentication methods, adapting to where they are in their modern authentication journey. Generally available in April.
  • Unified identity security provides end-to-end coverage across identity infrastructure, the identity control plane, and identity threat detection and response (ITDR)—built for rapid response and real-time decisions. The new identity security dashboard in Microsoft Defender highlights the most impactful insights across human and non-human identities to help accelerate response, and the new identity risk score unifies account-level risk signals to deliver a comprehensive view of user risk to inform real-time access decisions and SecOps investigations. Now available in preview.

Safeguard sensitive data across AI workflows

With AI embedded in everyday work, sensitive data increasingly moves through prompts, responses, and grounding flows—often faster than policies can keep up. Security teams need visibility into how AI interacts with data as well as the ability to stop data oversharing and data leakage. Microsoft brings data security directly into the AI control plane, giving organizations clear insight into risk, real-time enforcement at the point of use, and the confidence to enable AI responsibly across the enterprise. New Microsoft Purview capabilities include:

  • Expanded Purview data loss prevention for Microsoft 365 Copilot helps block sensitive information such as PII, credit card numbers, and custom data types in prompts from being processed or used for web grounding. Generally available March 31.
  • Purview embedded in Copilot Control System provides a unified view of AI‑related data risk directly in the Microsoft 365 Admin Center. Generally available in April.
  • Purview customizable data security reports enable tailored reporting and drilldowns to prioritized data security risks. Available in preview March 31.

Defend against threats across endpoints, cloud, and AI services

Security teams need proactive 24/7 threat protection that disrupts threats early and contains them automatically. Microsoft is extending predictive shielding to proactively limit impact and reduce exposure, expanding our container security capabilities, and introducing network-layer protection against malicious AI prompts.

  • Entra Internet Access prompt injection protection helps block malicious AI prompts across apps and agents by enforcing universal network-level policies. Generally available March 31.
  • Enhanced Defender for Cloud container security includes binary drift and antimalware prevention to close gaps attackers exploit in containerized environments. Now available in preview.
  • Defender for Cloud posture management adds broader coverage and supports Amazon Web Services and Google Cloud Platform, delivering security recommendations and compliance insights for newly discovered resources. Available in preview in April.
  • Defender predictive shielding dynamically adjusts identity and access policies during active attacks, reducing exposure and limiting impact. Now available in preview.

Defend with agents and experts

To defend in the agentic age, we need agentic defense. This means having an agentic defense platform and security agents embedded directly into the flow of work, augmented by deep human expertise and comprehensive security services when you need them.

Agents built into the flow of security work

Security teams move fastest with targeted help where and when work is happening. As alerts surface and investigations unfold across identities, data, endpoints, and cloud workloads, AI-powered assistance needs to operate alongside defenders. With Security Copilot now included in Microsoft 365 E5 and E7, we are empowering defenders with agents embedded directly into daily security and IT operations that help accelerate response and reduce manual effort so they can focus on what matters most.

New agents available now include:

  • Security Analyst Agent in Microsoft Defender helps accelerate threat investigations by providing contextual analysis and guided workflows. Available in preview March 26.
  • Security Alert Triage Agent in Microsoft Defender has the capabilities of the phishing triage agent and then extends to cloud and identity to autonomously analyze, classify, prioritize, and resolve repetitive low-value alerts at scale. Available in preview in April.
  • Conditional Access Optimization Agent in Microsoft Entra enhancements add context-aware recommendations, deeper analysis, and phased rollout to strengthen identity security. Agent generally available, enhancements now available in preview.
  • Data Security Posture Agent in Microsoft Purview enhancements include a credential scanning capability that can be used to proactively detect credential exposure in your data. Now available in preview.
  • Data Security Triage Agent in Microsoft Purview enhancements include an advanced AI reasoning layer and improved interpretation of custom Sensitive Information Types (SITs), to improve agent outputs during alert triage. Agent generally available, enhancements available in preview March 31.
  • Over 15 new partner-built agents extend Security Copilot with additional capabilities, all available in the Security Store.

Scale with an agentic defense platform

To help defenders and agents work together in a more coordinated, intelligence-driven way, Microsoft is expanding Sentinel, the agentic defense platform, to unify context, automate end-to-end workflows, and standardize access, governance, and deployment across security solutions.

  • Sentinel data federation powered by Microsoft Fabric investigates external security data in place in Databricks, Microsoft Fabric, and Azure Data Lake Storage while preserving governance. Now available in preview.
  • Sentinel playbook generator with natural language orchestration helps accelerate investigations and automate complex workflows. Now available in preview.
  • Sentinel granular delegated administrator privileges and unified role-based access control enable secure and scaling management for partners and enterprise customers with cross-tenant collaboration. Now available in preview.
  • Security Store embedded in Purview and Entra makes it easier to discover and deploy agents directly within existing security experiences. Generally available March 31.
  • Sentinel custom graphs powered by Microsoft Fabric enable views unique to your organization of relationships across your environment. Now available in preview.
  • Sentinel model context protocol (MCP) entity analyzer helps automate faster with natural language and harnesses the flexibility of code to accelerate responses. Generally available in April.

Strengthen with experts

Even the most mature security organizations face moments that call for deeper partnership—a sophisticated attack, a complex investigation, a situation where seasoned expertise alongside your team makes all the difference. The Microsoft Defender Experts Suite brings together expert-led services—technical advisory, managed extended detection and response (MXDR), and end-to-end proactive and reactive incident response—to help you defend against advanced cyber threats, build long-term resilience, and modernize security operations with confidence.

Apply Zero Trust for AI

Zero Trust has always been built on three principles: verify explicitly, use least privilege, and assume breach. As AI becomes embedded across your entire environment—from the models you build on, to the data they consume, to the agents that act on your behalf—applying those principles has never been more critical. At RSAC 2026, we’re extending our Zero Trust architecture, the full AI lifecycle—from data ingestion and model training to deployment agent behavior. And we’re making it actionable with an updated Zero Trust for AI reference architecture, workshop, assessment tool, and new patterns and practices articles to help you improve your security posture.

See you at RSAC

If you’re joining the global security community in San Francisco for RSAC 2026 Conference, we invite you to connect with us. Join us at our Microsoft Pre-Day event and stop by our booth at the RSAC Conference North Expo (N-5744) to explore our latest innovations across Microsoft Agent 365, Microsoft Defender, Microsoft Entra, Microsoft Purview, Microsoft Sentinel, and Microsoft Security Copilot and see firsthand how we can help your organization secure agents, secure your foundation, and help you defend with agents and experts. The future of security is ambient, autonomous, and built for the era of AI. Let’s build it together.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.


1Based on Microsoft first-party telemetry measuring agents built with Microsoft Copilot Studio or Microsoft Agent Builder that were in use during the last 28 days of November 2025.

2Microsoft Fiscal Year 2026 First Quarter Earnings Conference Call and Microsoft Fiscal Year 2026 Second Quarter Earnings Conference Call

The post Secure agentic AI end-to-end appeared first on Microsoft Security Blog.

]]>
Observability for AI Systems: Strengthening visibility for proactive risk detection http://approjects.co.za/?big=en-us/security/blog/2026/03/18/observability-ai-systems-strengthening-visibility-proactive-risk-detection/ Wed, 18 Mar 2026 16:00:00 +0000 http://approjects.co.za/?big=en-us/security/blog/?p=145844 As AI systems grow more autonomous, observability becomes essential. Learn how visibility into AI behavior helps detect risk and strengthen secure development.

The post Observability for AI Systems: Strengthening visibility for proactive risk detection appeared first on Microsoft Security Blog.

]]>
Adoption of Generative AI (GenAI) and agentic AI has accelerated from experimentation into real enterprise deployments. What began with copilots and chat interfaces has quickly evolved into powerful business systems that autonomously interact with sensitive data, call external APIs, connect to consequential tools, initiate workflows, and collaborate with other agents across enterprise environments. As these AI systems become core infrastructure, establishing clear, continuous visibility into how these systems behave in production can help teams detect risk, validate policy adherence, and maintain operational control.

Observability is one of the foundational security and governance requirements for AI systems operating in production. Yet many organizations don’t understand the critical importance of observability for AI systems or how to implement effective AI observability. That mismatch creates potential blind spots at precisely the moment when visibility matters most.

In February, Microsoft Corporate Vice President and Deputy Chief Information Security Officer, Yonatan Zunger, blogged about expanding Microsoft’s Secure Development Lifecycle (SDL) to address AI-specific security concerns. Today, we continue the discussion with a deep dive into observability as a necessity for the secure development of GenAI and agentic AI systems.

For additional context, read the Secure Agentic AI for Your Frontier Transformation blog that covers how to manage agent sprawl, strengthen identity controls, and improve governance across your tenant.

Observability for AI systems

In traditional software, client apps make structured API calls and backend services execute predefined logic. Because code paths follow deterministic flows, traditional observability tools can surface straightforward metrics like latency, errors, and throughput to track software performance in production.

GenAI and agentic AI systems complicate this model. AI systems are probabilistic by design and make complex decisions about what to do next as they run. This makes relying on predictable finite sets of success and failure modes much more difficult. We need to evolve the types of signals and telemetry collected so that we can accurately understand and govern what is happening in an AI system.

Consider this scenario: an email agent asks a research agent to look up something on the web. The research agent fetches a page containing hidden instructions and passes the poisoned content back to the email agent as trusted input. The email agent, now operating under attacker influence, forwards sensitive documents to unauthorized recipients, resulting in data exfiltration.

In this example, traditional health metrics stay green: no failures, no errors, no alerts. The system is working exactly as designed… except a boundary between untrusted external content and trusted agent context has been compromised.

This illustrates how AI systems require a unique approach to observability. Without insights into how context was assembled at each step—what was retrieved, how it impacted model behavior, and where it propagated across agents—there is no way to detect the compromise or reconstruct what occurred.

Traditional monitoring, built around uptime, latency, and error rates, can miss the root cause here and provide limited signal for attribution or reconstruction in AI-related scenarios. This is an example of one of the new categories of risk that the SDL must now account for, and it is why Microsoft has incorporated enhanced AI observability practices within our secure development practices.

Traditional observability versus AI observability

Observability of AI systems means the ability to monitor, understand, and troubleshoot what an AI system is doing, end-to-end, from development and evaluation to deployment and operation. Traditional services treat inputs as bounded and schema-defined. In AI systems, input is assembled context. This includes natural language instructions plus whatever the system pulls in and acts on, such as system and developer instructions, conversation history, outputs returned from tools, and retrieved content (web pages, emails, documents, tickets).

For AI observability, context is key: capture which input components were assembled for each run, including source provenance and trust classification, along with the resulting system outputs.

Traditional observability is often optimized for request-level correlation, where a single request maps cleanly to a single outcome, with correlation captured inside one trace. In AI systems, dangerous failures can unfold across many turns. Each step looks harmless until the conversation ramps into disallowed output, as we’ve seen in multi-turn jailbreaks like Crescendo.

For AI observability, best practices call for propagating a stable conversation identifier across turns, preserving trace context end-to-end, so outcomes can be understood within the full conversational narrative rather than in isolation. This is “agent lifecycle-level correlation,” where the span of correlation should be the same as the span of persistent memory or state within the system.

Defining AI system observability

Traditional observability is built on logs, metrics, and traces. This model works well for conventional software because it’s optimized around deterministic, quantifiable infrastructure and service behavior such as availability, latency, throughput, and discrete errors.

AI systems aren’t deterministic. They evaluate natural language inputs and return probabilistic results that can differ subtly (or significantly) from execution to execution. Logs, metrics, and traces still apply here, but what gets captured within them is different. Observability for AI systems updates traditional observability to capture AI-native signals.

Logs, metrics, and traces indicate what happened in the AI system at runtime.

  • Logs capture data about the interaction: request identity context, timestamp, user prompts and model responses, which agents or tools were invoked, which data sources were consulted, and so on. This is the core information that tells you what happened. User prompts and model responses are often the earliest signal of novel attacks before signatures exist, and are essential for identifying multi-turn escalation, verifying whether attacks changed system behavior, adjudicating safety detections, and reconstructing attack paths. User-prompt and model-response logs can reveal the exact moment an AI agent stops following user intent and starts obeying attacker-authored instructions from retrieved content.
  • Metrics measure traditional performance details like latency, response times, and errors as well as AI-specific information such as token usage, agent turns, and retrieval volume. This information can reveal issues such as unauthorized usage or behavior changes due to model updates.
  • Traces capture the end-to-end journey of a request as an ordered sequence of execution events, from the initial prompt through response generation. Without traces, debugging an agent failure means guessing which step went wrong.

AI observability also incorporates two new core components: evaluation and governance.

  • Evaluation measures response quality, assesses whether outputs are grounded in source material, and evaluates whether agents use tools correctly. Evaluation gives teams measurable signals to help understand agent reliability, instruction alignment, and operational risk over time.
  • Governance is the ability to measure, verify, and enforce acceptable system behavior using observable evidence. Governance uses telemetry and control plane mechanisms to ensure that the system supports policy enforcement, auditability, and accountability.

These key components of observability give teams improved oversight of AI systems, helping them ship with greater confidence, troubleshoot faster, and tune quality and cost over time.  

Operationalizing AI observability through the SDL

The SDL provides a formal mechanism by which technology leaders and product teams can operationalize observability. The following five steps can help teams implement observability in their AI development workflows.

  1. Incorporate AI observability into your secure development standards. Observability standards for GenAI and agentic AI systems should be codified requirements within your development lifecycle; not discretionary practices left to individual teams.
  2. Instrument from the start of development. Build AI-native telemetry into your system at design time, not after release. Aligning with industry conventions for logging and tracing, such as OpenTelemetry (OTel) and its GenAI semantic conventions, can improve consistency and interoperability across frameworks. For implementation in agentic systems, use platform-native capabilities such as Microsoft Foundry agent tracing (in preview) for runtime trace diagnostics in Foundry projects. For Microsoft Agent 365 integrations, use the OTel-based Microsoft Agent 365 Observability SDK (in Frontier preview) to emit telemetry into Agent 365 governance workflows.
  3. Capture the full context. Log user prompts and model responses, retrieval provenance, what tools were invoked, what arguments were passed, and what permissions were in effect. This detail can help security teams distinguish a model error from an exploited trust boundary and enables end-to-end forensic reconstruction. What to capture and retain should be governed by clear data contracts that balance forensic needs against privacy, data residency, retention requirements, and compliance with legal and regulatory obligations, with access controls and encryption aligned to enterprise policy and risk assessments.
  4. Establish behavioral baselines and alert on deviation. Capture normal patterns of agent activity—tool call frequencies, retrieval volumes, token consumption, evaluation score distributions—through Azure Monitor and Application Insights or similar services. Alert on meaningful departures from those baselines rather than relying solely on static error thresholds.
  5. Manage enterprise AI agents. Observability alone cannot answer every question. Technology leaders need to know how many AI agents are running, whether those agents are secure, and whether compliance and policy enforcement are consistent. Observability, when coupled with unified governance, can support improved operational control. Microsoft Foundry Control Plane, for example, consolidates inventory, observability, compliance with organization-defined AI guardrail policies, and security into one role-aware interface; Microsoft Agent 365 (in Frontier preview) provides tenant-level governance in the Microsoft 365 admin plane.

To learn more about how Microsoft can help you manage agent sprawl, strengthen identity controls, and improve governance across your tenant, read the Secure Agentic AI for Your Frontier Transformation blog.

Benefits for security teams

Making enterprise AI systems observable transforms opaque model behavior into actionable security signals, strengthening both proactive risk detection and reactive incident investigation.

When embedded in the SDL, observability becomes an engineering control. Teams define data contracts early, instrument during design and build, and verify before release that observability is sufficient for detection and incident response. Security testing can then validate that key scenarios such as indirect prompt injection or tool-mediated data exfiltration are surfaced by runtime protections and that logs and traces enable end-to-end forensic reconstruction of event paths, impact, and control decisions.  

Many organizations already deploy inference-time protections, such as Microsoft Foundry guardrails and controls. Observability complements these protections, enabling fast incident reconstruction, clear impact analysis, and measurable improvement over time. Security teams can then evaluate how systems behave in production and whether controls are working as intended.

Adapting traditional SDL and monitoring practices for non-deterministic systems doesn’t mean reinventing the wheel. In most cases, well-known instrumentation practices can be simply expanded to capture AI-specific signals, establish behavioral baselines, and test for detectability. Standards and platforms such as OpenTelemetry and Azure Monitor can support this shift.

AI observability should be a release requirement. If you cannot reconstruct an agent run or detect trust-boundary violations from logs and traces, the system may not be ready for production.

The post Observability for AI Systems: Strengthening visibility for proactive risk detection appeared first on Microsoft Security Blog.

]]>
Threat modeling AI applications http://approjects.co.za/?big=en-us/security/blog/2026/02/26/threat-modeling-ai-applications/ Thu, 26 Feb 2026 17:04:08 +0000 http://approjects.co.za/?big=en-us/security/blog/?p=145401 AI threat modeling helps teams identify misuse, emergent risk, and failure modes in probabilistic and agentic AI systems.

The post Threat modeling AI applications appeared first on Microsoft Security Blog.

]]>
Proactively identifying, assessing, and addressing risk in AI systems

We cannot anticipate every misuse or emergent behavior in AI systems. We can, however, identify what can go wrong, assess how bad it could be, and design systems that help reduce the likelihood or impact of those failure modes. That is the role of threat modeling: a structured way to identify, analyze, and prioritize risks early so teams can prepare for and limit the impact of real‑world failures or adversarial exploits.

Traditional threat modeling evolved around deterministic software: known code paths, predictable inputs and outputs, and relatively stable failure modes. AI systems (especially generative and agentic systems) break many of those assumptions. As a result, threat modeling must be adapted to a fundamentally different risk profile.

Why AI changes threat modeling

Generative AI systems are probabilistic and operate over a highly complex input space. The same input can produce different outputs across executions, and meaning can vary widely based on language, context, and culture. As a result, AI systems require reasoning about ranges of likely behavior, including rare but high‑impact outcomes, rather than a single predictable execution path.

This complexity is amplified by uneven input coverage and resourcing. Models perform differently across languages, dialects, cultural contexts, and modalities, particularly in low‑resourced settings. These gaps make behavior harder to predict and test, and they matter even in the absence of malicious intent. For threat modeling teams, this means reasoning not only about adversarial inputs, but also about where limitations in training data or understanding may surface failures unexpectedly.

Against this backdrop, AI introduces a fundamental shift in how inputs influence system behavior. Traditional software treats untrusted input as data. AI systems treat conversation and instruction as part of a single input stream, where text—including adversarial text—can be interpreted as executable intent. This behavior extends beyond text: multimodal models jointly interpret images and audio as inputs that can influence intent and outcomes.

As AI systems act on this interpreted intent, external inputs can directly influence model behavior, tool use, and downstream actions. This creates new attack surfaces that do not map cleanly to classic threat models, reshaping the AI risk landscape.

Three characteristics drive this shift:

  • Nondeterminism: AI systems require reasoning about ranges of behavior rather than single outcomes, including rare but severe failures.
  • Instruction‑following bias: Models are optimized to be helpful and compliant, making prompt injection, coercion, and manipulation easier when data and instructions are blended by default.
  • System expansion through tools and memory: Agentic systems can invoke APIs, persist state, and trigger workflows autonomously, allowing failures to compound rapidly across components.

Together, these factors introduce familiar risks in unfamiliar forms: prompt injection and indirect prompt injection via external data, misuse of tools, privilege escalation through chaining, silent data exfiltration, and confidently wrong outputs treated as fact.

AI systems also surface human‑centered risks that traditional threat models often overlook, including erosion of trust, overreliance on incorrect outputs, reinforcement of bias, and harm caused by persuasive but wrong responses. Effective AI threat modeling must treat these risks as first‑class concerns, alongside technical and security failures.

Differences in Threat Modeling: Traditional vs. AI Systems
CategoryTraditional SystemsAI Systems
Types of ThreatsFocus on preventing data breaches, malware, and unauthorized access.Includes traditional risks, but also AI-specific risks like adversarial attacks, model theft, and data poisoning.
Data SensitivityFocus on protecting data in storage and transit (confidentiality, integrity).In addition to protecting data, focus on data quality and integrity since flawed data can impact AI decisions.
System BehaviorDeterministic behavior—follows set rules and logic.Adaptive and evolving behavior—AI learns from data, making it less predictable.
Risks of Harmful OutputsRisks are limited to system downtime, unauthorized access, or data corruption.AI can generate harmful content, like biased outputs, misinformation, or even offensive language.
Attack SurfacesFocuses on software, network, and hardware vulnerabilities.Expanded attack surface includes AI models themselves—risk of adversarial inputs, model inversion, and tampering.
Mitigation StrategiesUses encryption, patching, and secure coding practices.Requires traditional methods plus new techniques like adversarial testing, bias detection, and continuous validation.
Transparency and ExplainabilityLogs, audits, and monitoring provide transparency for system decisions.AI often functions like a “black box”—explainability tools are needed to understand and trust AI decisions.
Safety and EthicsSafety concerns are generally limited to system failures or outages.Ethical concerns include harmful AI outputs, safety risks (e.g., self-driving cars), and fairness in AI decisions.

Start with assets, not attacks

Effective threat modeling begins by being explicit about what you are protecting. In AI systems, assets extend well beyond databases and credentials.

Common assets include:

  • User safety, especially when systems generate guidance that may influence actions.
  • User trust in system outputs and behavior.
  • Privacy and security of sensitive user and business data.
  • Integrity of instructions, prompts, and contextual data.
  • Integrity of agent actions and downstream effects.

Teams often under-protect abstract assets like trust or correctness, even though failures here cause the most lasting damage. Being explicit about assets also forces hard questions: What actions should this system never take? Some risks are unacceptable regardless of potential benefit, and threat modeling should surface those boundaries early.

Understand the system you’re actually building

Threat modeling only works when grounded in the system as it truly operates, not the simplified version of design docs.

For AI systems, this means understanding:

  • How users actually interact with the system.
  • How prompts, memory, and context are assembled and transformed.
  • Which external data sources are ingested, and under what trust assumptions.
  • What tools or APIs the system can invoke.
  • Whether actions are reactive or autonomous.
  • Where human approval is required and how it is enforced.

In AI systems, the prompt assembly pipeline is a first-class security boundary. Context retrieval, transformation, persistence, and reuse are where trust assumptions quietly accumulate. Many teams find that AI systems are more likely to fail in the gaps between components — where intent and control are implicit rather than enforced — than at their most obvious boundaries.

Model misuse and accidents 

AI systems are attractive targets because they are flexible and easy to abuse. Threat modeling has always focused on motivated adversaries:

  • Who is the adversary?
  • What are they trying to achieve?
  • How could the system help them (intentionally or not)?

Examples include extracting sensitive data through crafted prompts, coercing agents into misusing tools, triggering high-impact actions via indirect inputs, or manipulating outputs to mislead downstream users.

With AI systems, threat modeling must also account for accidental misuse—failures that emerge without malicious intent but still cause real harm. Common patterns include:

  • Overestimation of Intelligence: Users may assume AI systems are more capable, accurate, or reliable than they are, treating outputs as expert judgment rather than probabilistic responses.
  • Unintended Use: Users may apply AI outputs outside the context they were designed for, or assume safeguards exist where they do not.
  • Overreliance: When users accept incorrect or incomplete AI outputs, typically because AI system design makes it difficult to spot errors.

Every boundary where external data can influence prompts, memory, or actions should be treated as high-risk by default. If a feature cannot be defended without unacceptable stakeholder harm, that is a signal to rethink the feature, not to accept the risk by default.

Use impact to determine priority, and likelihood to shape response

Not all failures are equal. Some are rare but catastrophic; others are frequent but contained. For AI systems operating at a massive scale, even low‑likelihood events can surface in real deployments.

Historically risk management multiplies impact by likelihood to prioritize risks. This doesn’t work for massively scaled systems. A behavior that occurs once in a million interactions may occur thousands of times per day in global deployment. Multiplying high impact by low likelihood often creates false comfort and pressure to dismiss severe risks as “unlikely.” That is a warning sign to look more closely at the threat, not justification to look away from it.

A more useful framing separates prioritization from response:

  • Impact drives priority: High-severity risks demand attention regardless of frequency.
  • Likelihood shapes response: Rare but severe failures may rely on manual escalation and human review; frequent failures require automated, scalable controls.
Figure 1 Impact, Likelihood, and Mitigation by Alyssa Ofstein.

Every identified threat needs an explicit response plan. “Low likelihood” is not a stopping point, especially in probabilistic systems where drift and compounding effects are expected.

Design mitigations into the architecture

AI behavior emerges from interactions between models, data, tools, and users. Effective mitigations must be architectural, designed to constrain failure rather than react to it.

Common architectural mitigations include:

  • Clear separation between system instructions and untrusted content.
  • Explicit marking or encoding of untrusted external data.
  • Least-privilege access to tools and actions.
  • Allow lists for retrieval and external calls.
  • Human-in-the-loop approval for high-risk or irreversible actions.
  • Validation and redaction of outputs before data leaves the system.

These controls assume the model may misunderstand intent. Whereas traditional threat modeling assumes that risks can be 100% mitigated, AI threat modeling focuses on limiting blast radius rather than enforcing perfect behavior. Residual risk for AI systems is not a failure of engineering; it is an expected property of non-determinism. Threat modeling helps teams manage that risk deliberately, through defense in depth and layered controls.

Detection, observability, and response

Threat modeling does not end at prevention. In complex AI systems, some failures are inevitable, and visibility often determines whether incidents are contained or systemic.

Strong observability enables:

  • Detection of misuse or anomalous behavior.
  • Attribution to specific inputs, agents, tools, or data sources.
  • Accountability through traceable, reviewable actions.
  • Learning from real-world behavior rather than assumptions.

In practice, systems need logging of prompts and context, clear attribution of actions, signals when untrusted data influences outputs, and audit trails that support forensic analysis. This observability turns AI behavior from something teams hope is safe into something they can verify, debug, and improve over time.

 Response mechanisms build on this foundation. Some classes of abuse or failure can be handled automatically, such as rate limiting, access revocation, or feature disablement. Others require human judgment, particularly when user impact or safety is involved. What matters most is that response paths are designed intentionally, not improvised under pressure.

Threat modeling as an ongoing discipline

AI threat modeling is not a specialized activity reserved for security teams. It is a shared responsibility across engineering, product, and design.

The most resilient systems are built by teams that treat threat modeling as one part of a continuous design discipline — shaping architecture, constraining ambition, and keeping human impact in view. As AI systems become more autonomous and embedded in real workflows, the cost of getting this wrong increases.

Get started with AI threat modeling by doing three things:

  1. Map where untrusted data enters your system.
  2. Set clear “never do” boundaries.
  3. Design detection and response for failures at scale.

As AI systems and threats change, these practices should be reviewed often, not just once. Thoughtful threat modeling, applied early and revisited often, remains an important tool for building AI systems that better earn and maintain trust over time

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.

The post Threat modeling AI applications appeared first on Microsoft Security Blog.

]]>
Detecting and mitigating common agent misconfigurations http://approjects.co.za/?big=en-us/security/blog/2026/02/12/copilot-studio-agent-security-top-10-risks-detect-prevent/ Thu, 12 Feb 2026 20:38:49 +0000 Agents are increasingly powerful. With that power comes risk: small misconfigurations, over‑broad sharing, unauthenticated access, and weak orchestration controls can create real exposure. This article consolidates the most common risks we observe and maps each to practical detections with Microsoft Defender, and mitigations in Copilot Studio.

The post Detecting and mitigating common agent misconfigurations appeared first on Microsoft Security Blog.

]]>
Organizations are rapidly adopting agents, but attackers are equally fast at exploiting misconfigured AI workflows. Mis-sharing, unsafe orchestration, and weak authentication create new identity and data‑access paths that traditional controls don’t monitor. As agents become integrated into operational systems, exposure becomes both easier and more dangerous. Detecting and preventing misconfigurations early is now a core part of AI security posture.

Agents are becoming a core part of business workflows: automating tasks, accessing data, and interacting with systems at scale. That power cuts both ways. In real environments, where organizations turn to low-code solutions to scale agent production, we repeatedly see small, well‑intentioned configuration choices turn into security gaps: agents shared too broadly, exposed without authentication, running risky actions, or operating with excessive privileges. These issues rarely look dangerous- until they are abused. 

If you want to find and stop these risks before they turn into incidents, this post is for you. We break down ten common agent misconfigurations we observe in the wild, showing how to detect them using Microsoft Defender Advanced Hunting via the relevant Community Hunting Queries, and mitigate them using safe configurations with Copilot Studio. 

Short on time? Start with the table below. It gives you a one‑page view of the risks, their impact, and the exact detections that surface them. If something looks familiar, jump straight to the relevant scenario and mitigation.

Each section then dives deeper into a specific risk and recommended mitigations- so you can move from awareness to action, fast.

#Misconfiguration & Risk Security Impact Copilot Studio Mitigating Control(s) Advanced Hunting Community Queries (go to: Security portal>Advanced hunting>Queries> Community Queries>AI Agent folder)  
Agent shared with entire organization or broad groups Unintended access, misuse, expanded attack surface Enable authentication (on by default) Agent sharing limits Automatic security scan warnings at design-time and publish  • AI Agents – Organization or Multitenant Shared 
Agents that do not require authentication Public exposure, unauthorized access, data leakage Enforce agent authentication at the environment level Automatic security scan warnings at design-time and publish  • AI Agents – No Authentication Required 
Agents with HTTP Request actions using risky configurations Governance bypass, insecure communications, unintended API access Apply data policies/advanced connector policies per environment Communicate best practices in Maker welcome content  • AI Agents – HTTP Requests to connector endpoints 
• AI Agents – HTTP Requests to nonHTTPS endpoints 
• AI Agents – HTTP Requests to nonstandard ports 
Agents capable of emailbased data exfiltration Data exfiltration via prompt injection or misconfiguration Implement Microsoft Defender Real-time Protection Disable specific actions with connector action control  • AI Agents – Sending email to AIcontrolled input values 
• AI Agents – Sending email to external mailboxes 
Dormant connections, actions, or agents Hidden attack surface, stale privileged access Review agents in Inventory • AI Agents – Published Dormant (30d) 
• AI Agents – Unpublished Unmodified (30d) 
• AI Agents – Unused Actions 
• AI Agents – Dormant Author Authentication Connection 
Agents using author (maker) authentication Privilege escalation, separation of duties bypassofduties bypass Restrict maker-provided credentials Automatic security scan warnings at design-time and publish • AI Agents – Published Agents with Author Authentication 
• AI Agents – MCP Tool with Maker Credentials 
Agents containing hardcoded credentials Credential leakage, unauthorized system access Communicate best practices in Maker welcome content Store secrets in Azure Key Vault, referencing as environment variables Managed environments: Sharing LimitsDeployment Pipelines with gated extensions  AI Agents – Hardcoded Credentials in Topics or Actions 
Agents with Model Context Protocol (MCP) tools configured Undocumented access paths, unintended system interactions Manage available MCP tooling with Data Policies and Advanced Connector Policies    AI Agents – MCP Tool Configured 
Agents with generative orchestration lacking instructions Prompt abuse, behavior drift, unintended actions Communicate Best Practices for agent instructions Azure Prompt Shield & RAI guardrails (on by default) Implement Microsoft Defender Real-time Protection  AI Agents – Published Generative Orchestration without Instructions 
10 Orphaned agents (no active owner) Lack of governance, outdated logic, unmanaged access Check inventory for stale agents Quarantine agents to be decommissioned AI Agents – Orphaned Agents with Disabled Owners 

Top 10 risks you can detect and prevent

Imagine this scenario: A help desk agent is created in your organization with simple instructions.  

The maker, someone from the support team, connects it to an organizational database using an MCP tool, so it can pull relevant customer information from internal tables and provide better answers. So far, so good. 

Then the maker decides, on their own, that the agent doesn’t need authentication. After all, it’s “only” shared internally, and the data belongs to employees anyway (See example in Figure 1)). That might already sound suspicious to you. But it doesn’t to everyone. 

You might be surprised how often agents like this exist in real environments—and how rarely security teams get an active signal when they’re created. No alert. No review. Just another “helpful” agent quietly going live. 

Now here’s the question: 
Out of the 10 risks described in this article, how many do you think are already present in this simple agent?  

The answer comes at the end of the blog. 

Figure 1 – Example Help Desk agent.

Scenario 1: Agent shared with the entire organization or broad groups 

Sharing an agent with your entire organization or broad security groups exposes its capabilities without proper access boundaries. While convenient, this practice expands the attack surface. Users unfamiliar with the agent’s purpose might unintentionally trigger sensitive actions, and threat actors with minimal access could use the agent as an entry point. 

In many organizations, this risk occurs because broad sharing is fast and easy, often lacking controls to ensure only the right users have access. This results in agents being visible to everyone, including users with unrelated roles or inappropriate permissions. This visibility increases the risk of data exposure, misuse, and unintended activation of sensitive connectors or actions. 

Mitigation 

Copilot Studio agents are restricted to a Power Platform environment, which enforces data, security, and governance policies. Properly managing environments helps prevent risks like oversharing and controls who can create or deploy agents, making it important to consider an environment strategy for safe agent deployment. 

With Managed Environments, administrators can limit sharing by setting how broadly Copilot Studio agents can be shared within an environment or environment group, including specifying numerical limits on recipients. 

Scenario 2: Agents That Do Not Require Authentication 

Agents that you can access without authentication, or that only prompt for authentication on demand, create a significant exposure point. When an agent is publicly reachable or unauthenticated, anyone with the link can use its capabilities. Even if the agent appears harmless, its topics, actions, or knowledge sources might unintentionally reveal internal information or allow interactions that were never for public access. 

This gap appears because authentication was deactivated for testing, left in its default state, or misunderstood as optional. The results in an agent that behaves like a public entry point into organizational data or logic. Without proper controls, this creates a risk of data leakage, unintended actions, and misuse by external or anonymous users. 

Mitigation 

In Copilot Studio, recommended default settings direct makers to utilize Microsoft Entra ID for authentication purposes. If a maker chooses the ‘No authentication’ option, they will be presented with a warning both during design and at publishing, ensuring that such a configuration is intentional. 

To further reduce the risk of misconfigurations, administrators have the ability to prevent unauthenticated access by implementing data policies at scale, ranging from individual environments to tenant-wide enforcement. 

Scenario 3: Agents with HTTP Request action with risky configurations 

Agents that perform direct HTTP requests introduce a unique risk, especially when those requests target non-standard ports, insecure schemes, or sensitive services that already have pre-built connectors. These patterns often bypass the governance, validation, throttling, and identity controls that connectors provide. As a result, they can expose the organization to misconfigurations, information disclosure, or unintended privilege escalation. 

These configurations appear unintentionally. A maker might copy a sample request, test an internal endpoint, or use HTTP actions for flexibility during testing and convenience. Without proper review, this can lead to agents issuing unsecured calls over HTTP or invoking critical Microsoft APIs directly through URLs instead of secured connectors. Each of these behaviors represents an opportunity for misuse or accidental exposure of organizational data. 

Mitigation 

Providing agents with tools increases risk, so makers should be educated on safe setup. In Copilot Studio, Maker welcome content easily informs makers of best practices and policies. 

Whenever possible, use Microsoft-published secured connectors instead of HTTP Request actions to access APIs and services. Admins can enforce this by applying a data policy that limits risky connectors at the environment level. 

Combining data policies with environments helps contain high-risk tools like HTTP Request to development stages, while restricting them in production where agents are published and shared. 

Scenario 4: Agents Capable of Email-Based Data Exfiltration 

Agents that send emails using dynamic or externally controlled inputs present a significant risk. When an agent uses generative orchestration to send email, the orchestrator determines the recipient and message content at runtime. In a successful indirect prompt injection attack, a threat actor could instruct the agent to send internal data to external recipients. 

A similar risk exists when an agent is explicitly configured to send emails to external domains. Even for legitimate business scenarios, unaudited outbound email can allow sensitive information to leave the organization. Because email is an immediate outbound channel, any misconfiguration can lead to unmonitored data exposure. 

Many organizations create this gap unintentionally. Makers often use email actions for testing, notifications, or workflow automation without restricting recipient fields. Without safeguards, these agents can become exfiltration channels for any user who triggers them or for a threat actor exploiting generative orchestration paths. 

Mitigation 

To safeguard sensitive data within the organization, Copilot Studio may be configured for policy enforcement, runtime protection, and human oversight. 

Addressing the risk of prompt injection attacks, Copilot Studio can leverage both native and externally integrated (Defender) runtime protections. These mechanisms actively monitor for prompt injection threats and deviations from intended objectives, thereby establishing a comprehensive, layered defense at runtime. 

Given that outbound channels such as email allow for rapid data exfiltration, administrators are advised to implement data policies to regulate the use of email connectors by agents. They may also restrict permitted actions through connector action control. Ultimately, human‑in‑the‑loop approvals ensure user oversight of high-risk operations, enabling agents to provide assistance while maintaining user control over critical decisions. 

Scenario 5: Dormant Connections, Actions, or Agents Within the Organization 

Dormant agents and unused components might seem harmless, but they can create significant organizational risk. Unmonitored entry points often lack active ownership. These include agents that haven’t been invoked for weeks, unpublished drafts, or actions using Maker authentication. When these elements stay in your environment without oversight, they might contain outdated logic or sensitive connections That don’t meet current security standards. 

Dormant assets are especially risky because they often fall outside normal operational visibility. While teams focus on active agents, older configurations are easily forgotten. Threat actors, frequently target exactly these blind spots. For example: 

  • A published but unused agent can still be called.  
  • A dormant maker-authenticated action might trigger elevated operations. 
  • Unused actions in classic orchestration can expose sensitive connectors if they are activated.  

Without proper governance, these artifacts can expose sensitive connectors if they are activated. 

Mitigation 

To address the risks posed by unused agents, administrators should regularly monitor the Power Platform Inventory for agents with minimal activity. This inventory provides visibility of agents across all environments, identifying those that haven’t been actively used. 

These agents can then be transitioned into a deprecation phase, where their availability is limited and their impact is assessed. Maintaining a dedicated list of agents flagged for deprecation helps prevent unnecessary proliferation and signals organizational intent to retire them, ensuring unused agents don’t become unmanaged risks. Learn how agents can be quarantined during review. 

Scenario 6: Agents Using Author Authentication 

When agents use the maker’s personal authentication, they act on behalf of the creator rather than the end user.  In this configuration, every user of the agent inherits the maker’s permissions. If those permissions include access to sensitive data, privileged operations, or high impact connectors, the agent becomes a path for privilege escalation. 

This exposure often happens unintentionally. Makers might allow author authentication for convenience during development or testing because it is the default setting of certain tools. However, once published, the agent continues to run with elevated permissions even when invoked by regular users. In more severe cases, Model Context Protocol (MCP) tools configured with maker credentials allow threat actors to trigger operations that rely directly on the creator’s identity. 

Author authentication weakens separation of duties and bypasses the principle of least privilege. It also increases the risk of credential misuse, unauthorized data access, and unintended lateral movement 

Mitigation 

In Copilot Studio, tool authentication is configured by default to use end user credentials, prioritizing security and limiting privilege escalation risks. When a maker attempts to change the authentication method to ‘Maker-provided credentials’, the automatic security scan triggers a warning to ensure that this change is made deliberately and with awareness of the associated risks. This safeguard helps makers avoid unintentionally enabling elevated permissions for agents. 

Scenario 7: Agents Containing Hard-Coded Credentials 

Beyond the immediate leakage risk, hard-coded credentials bypass the standard enterprise controls normally applied to secure secret storage. They are not rotated, not governed by Key Vault policies, and not protected by environment variable isolation. As a result, even basic visibility into agent definitions may expose valuable secrets. 

Agents that contain hard-coded credentials inside topics or actions introduce a severe security risk. Clear-text secrets embedded directly in agent logic can be read, copied, or extracted by unintended users or automated systems. This often occurs when makers paste API keys, authentication tokens, or connection strings during development or debugging, and the values remain embedded in the production configuration. Such credentials can expose access to external services, internal systems, or sensitive APIs, enabling unauthorized access or lateral movement. 

Mitigation 

Copilot Studio agents should never store secrets directly. Instead, credentials should be externalized through managed connections, agent flows, and environment variables backed by Azure Key Vault, so that secrets are securely resolved at runtime and never exposed in agent definitions.  

Agents should enforce Microsoft Entra ID authentication, avoid author‑run contexts in production, and operate with least‑privilege access. Mechanisms such as Power Platform pipelinesgated promotions, and audit logging, help prevent secrets from being introduced during development and ensure secure practices are consistently applied from development through production. 

Scenario 8: Agents With Model Context Protocol (MCP) Tools Configured 

AI agents that include Model Context Protocol (MCP) tools provide a powerful way to integrate with external systems or run custom logic. However, if these MCP tools aren’t actively maintained or reviewed, they can introduce undocumented access patterns into the environment.  

This risk when MCP configurations are: 

  • Activated by default 
  • Copied between agents 
  • Left active after the original integration is no longer needed 

Unmonitored MCP tools might expose capabilities that exceed the agent’s intended purpose. This is especially true if they allow access to privileged operations or sensitive data sources. Without regular oversight, these tools can become hidden entry points that user or threat actors might trigger unintended system interactions. 

Mitigation 

Copilot Studio enforces secure integration of Model Context Protocol (MCP) tools by onboarding MCP servers as connectors, ensuring all connections are only accessible if approved by IT. Administrators should restrict MCP access to managed environments, implement data policiesadvanced connector policies, and regularly review the Power Platform Inventory to identify agents with dormant or risky integrations to minimize the risk of undocumented access paths and privilege escalation. 

By following these practices, organizations ensure that MCP tools are consistently managed and aligned with least-privilege principles, preventing unintended system interaction or access. 

Scenario 9: Agents With Generative Orchestration Lacking Instructions 

AI agents that use generative orchestration without defined instructions face a high risk of unintended behavior. Instructions are the primary way to align a generative model with its intended purpose. If instructions are missing, incomplete, or misconfigured, the orchestrator lacks the context needed to limit its output. This makes the agent more vulnerable to user influence from user inputs or hostile prompts. 

A lack of guidance can cause an agent to; 

  • Drift from its expected behaviors. The agent might not follow its intended logic. 
  • Use unexpected reasoning. The model might follow logic paths that don’t align with business needs. 
  • Interact with connected systems in unintended ways. The agent might trigger actions that were never planned.  

For organizations that need predictable and safe behavior, behavior, missing instructions area significant configuration gap. 

Mitigation  

Copilot Studio provides built-in protections to reduce risks like prompt injection and goal manipulation when using generative orchestration. At the platform layer, it scans user input and context for common injection attempts and goal deviations. 

External runtime defenses, including Microsoft Defender integration, add a second layer by inspecting agent actions during execution, blocking misuse before actions are completed. 

Despite these controls, clear maker guidance remains vital. Generative orchestration relies on intentional instructions, and training makers on best practices is essential to ensure safe, predictable results. 

10: Orphaned agents

Orphaned agents are agents whose owners are no longer with organization or their accounts deactivated. Without a valid owner, no one is responsible for oversight, maintenance, updates, or lifecycle management. These agents might continue to run, interact with users, or access data without an accountable individual ensuring the configuration remains secure.

Because ownerless agents bypass standard review cycles, they often contain outdated logic, deprecated connections, or sensitive access patterns that don’t align with current organizational requirements.

Mitigation 

With Copilot Studio, Administrators can leverage the Power Platform Inventory to gain usage visibility and continuously identify agents that lack an accountable owner or are no longer actively used. The Inventory provides a near‑real‑time view of all agents across environments, including ownership metadata and usage signals, enabling administrators to surface agents with missing owners or consistently low or no activity.  

By reviewing this information alongside usage and adoption reporting, administrators can distinguish between actively governed solutions and orphaned or abandoned agents that no longer serve a business purpose. 

Once identified, orphaned or unused agents can be placed into a deprecation state as part of governance operations. Learn how agents can be quarantined for review.  

Maintaining a deprecation list informed by inventory and usage insights allows organizations to signal intent, remove broad availability, and prevent further sprawl while assessing risk and impact before reassignment to a new owner or deletion. 


Remember the help desk agent we started with? That simple agent setup quietly checked off more than half of the risks in this list.

Keep reading and running the Advanced Hunting queries in the AI Agents folder, to find agents carrying these risks in your own environment before it’s too late.

Figure 2: The example Help Desk agent was detected by a query for unauthenticated agents.

From Findings to Fixes: A Practical Mitigation Playbook for Agents (Beyond Copilot Studio)

The ten risks described above manifest in different ways, but they consistently stem from a small set of underlying security gaps: overexposure, weak authentication boundaries, unsafe orchestration, and missing lifecycle governance. 

Figure 3 – Underlying security gaps.

Damage doesn’t begin with the attack. It starts when risks are left untreated. The section below is a practical checklist of validations and actions that help close common agent security gaps before they’re exploited. Read it once, apply it consistently, and save yourself the cost of cleaning up later. Fixing security debt is always more expensive than preventing it.

1. Verify Intent and Ownership 

Before changing configurations, confirm whether the agent’s behavior is intentional and still aligned with business needs. 

  • Validate the business justification for broad sharing, public access, external communication, or elevated permissions with the agent owner.  
  • Confirm whether agents without authentication are explicitly designed for public use and whether this aligns with organizational policy. 
  • Review agent topics, actions, and knowledge sources to ensure no internal, sensitive, or proprietary information is exposed unintentionally. 
  • Ensure that every agent has an active, accountable owner. Reassign ownership for orphaned agents or retire agents that no longer have a clear purpose.  
  • Validate whether dormant agents, connections, or actions are still required, and decommission those that are not. 
  • Perform periodic reviews for agents and establish a clear organizational policy for agents’ creation. 

2. Reduce Exposure and Tighten Access Boundaries 

Most agent risks are amplified by unnecessary exposure. Reducing who can reach the agent, and what it can reach, significantly lowers risk.  

  • Restrict agent sharing to well-scoped, role-based security groups instead of entire organizations or broad groups. 
  • Establish and enforce organizational policies defining when broad sharing or public access is allowed and what approvals are required. 
  • Enforce full authentication by default. Only allow unauthenticated access to agents when explicitly required and approved.  
  • Limit outbound communication paths:  
  • Restrict email actions to approved domains or hardcoded recipients. 
  • Avoid AI-controlled dynamic inputs for sensitive outbound actions such as email or HTTP requests. 
  • Perform periodic reviews of shared agents to ensure visibility and access remain appropriate over time. 

3. Enforce Strong Authentication and Least Privilege 

Agents must not inherit more privilege than necessary, especially through development shortcuts. 

  • Replace author authentication with user-based or system-based authentication wherever possible.  
  • Review all tools and abilities that run using delegated access and reconfigure those that expose sensitive or high-impact services. 
  • Audit MCP tools that rely on creator credentials and remove or update them if they are no longer required. 
  • Apply the principle of least privilege to all connectors, actions, and data access paths, even when broad sharing is justified. 

4. Harden Orchestration and Dynamic Behavior 

Generative agents require explicit guardrails to prevent unintended or unsafe behavior. 

  • Ensure clear, well-structured instructions are configured for generative orchestration to define the agent’s purpose, constraints, and expected behavior.  
  • Avoid allowing the model to dynamically decide:  
  • Email recipients 
  • External endpoints 
  • Execution logic for sensitive actions 
  • Review HTTP Request actions carefully:  
  • Confirm endpoint, scheme, and port are required for the intended use case. 
  • Prefer official integrations over raw HTTP requests to benefit from authentication, governance, logging, and policy enforcement. 
  • Enforce HTTPS and avoid nonstandard ports unless explicitly approved. 

5. Eliminate Dead Weight and Protect Secrets 

Unused capabilities and embedded secrets quietly expand the attack surface. 

  • Remove or deactivate:  
  • Dormant agents 
  • Unpublished or unmodified agents 
  • Unused actions 
  • Stale connections 
  • Outdated or unnecessary MCP tool configurations 
  • Clean up actions that use author authentication and classic orchestration actions that are no longer referenced. 
  • Externalize all secrets to secure storage and reference them via environment variables instead of embedding them in agent logic. 

Treat agents as production assets, not experiments, and include them in regular lifecycle and governance reviews. 

Effective posture management is essential for maintaining a secure and predictable agent production environment. As agents grow in capability and integrate with increasingly sensitive systems, organizations must adopt structured governance practices that identify risks early and enforce consistent configuration standards.  

The scenarios and detection rules presented in this blog provide a foundation to help you; 

  • Discovering common security gaps 
  • Strengthening oversight  
  • Reduce the overall attack surface  

By combining automated detection with clear operational policies, you can ensure that agents remain secure, aligned, and resilient. 

This research is provided by Microsoft Defender Security Research with contributions from Dor Edry, Uri Oren, and the Copilot Studio team.

Learn more

The post Detecting and mitigating common agent misconfigurations appeared first on Microsoft Security Blog.

]]>
New Microsoft Data Security Index report explores secure AI adoption to protect sensitive data http://approjects.co.za/?big=en-us/security/blog/2026/01/29/new-microsoft-data-security-index-report-explores-secure-ai-adoption-to-protect-sensitive-data/ Thu, 29 Jan 2026 17:00:00 +0000 http://approjects.co.za/?big=en-us/security/blog/?p=144879 The 2026 Microsoft Data Security Index explores one of the most pressing questions facing organizations today: How can we harness the power of generative while safeguarding sensitive data?

The post New Microsoft Data Security Index report explores secure AI adoption to protect sensitive data appeared first on Microsoft Security Blog.

]]>
Generative AI and agentic AI are redefining how organizations innovate and operate, unlocking new levels of productivity, creativity and collaboration across industry teams. From accelerating content creation to streamlining workflows, AI offers transformative benefits that empower organizations to work smarter and faster. These capabilities, however, also introduce new dimensions of data risk—as AI adoption grows, so does the urgency for effective data security that keeps pace with AI innovation. In the 2026 Microsoft Data Security Index report, we explored one of the most pressing questions facing today’s organizations: How can we harness the power of AI while safeguarding sensitive data?

47% of surveyed organizations are​ implementing controls focused on generative AI workloads

To fully realize the potential of AI, organizations must pair innovation with responsibility and robust data security. This year, the Data Security Index report builds upon the responses of more than 1,700 security leaders to highlight three critical priorities for protecting organizational data and securing AI adoption:

  1. Moving from fragmented tools to unified data security.
  2. Managing AI-powered productivity securely.
  3. Strengthening data security with generative AI itself.

By consolidating solutions for better visibility and governance controls, implementing robust controls processes to protect data in AI-powered workflows, and using generative AI agents and automation to enhance security programs, organizations can build a resilient foundation for their next wave of generative AI-powered productivity and innovation. The result is a future where AI both drives efficiency and acts as a powerful ally in defending against data risk, unlocking growth without compromising protection.

In this article we will delve into some of the Data Security Index report’s key findings that relate to generative AI and how they are being operationalized at Microsoft. The report itself has a much broader focus and depth of insight.

1. From fragmented tools to unified data security

Many organizations still rely on disjointed tools and siloed controls, creating blind spots that hinder the efficacy of security teams. According to the 2026 Data Security Index, decision-makers cite poor integration, lack of a unified view across environments, and disparate dashboards as their top challenges in maintaining proper visibility and governance. These gaps make it harder to connect insights and respond quickly to risks—especially as data volumes and data environment complexity surge. Security leaders simply aren’t getting the oversight they need.

Why it matters
Consolidating tools into integrated platforms improves visibility, governance, and proactive risk management.

To address these challenges, organizations are consolidating tools, investing in unified platforms like Microsoft Purview that bring operations together while improving holistic visibility and control. These integrated solutions frequently outperform fragmented toolsets, enabling better detection and response, streamlined management, and stronger governance.

As organizations adopt new AI-powered technologies, many are also leaning into emerging disciplines like Microsoft Purview Data Security Posture Management (DSPM) to keep pace with evolving risks. Effective DSPM programs help teams identify and prioritize data‑exposure risks, detect access to sensitive information, and enforce consistent controls while reducing complexity through unified visibility. When DSPM provides proactive, continuous oversight, it becomes a critical safeguard—especially as AI‑powered data flows grow more dynamic across core operations.

More than 80% of surveyed organizations are implementing or developing DSPM strategies

We’re trying to use fewer vendors. If we need 15 tools, we’d rather not manage 15 vendor solutions. We’d prefer to get that down to five, with each vendor handling three tools.”

—Global information security director in the hospitality and travel industry

2. Managing AI-powered productivity securely

Generative AI is already influencing data security incident patterns: 32% of surveyed organizations’ data security incidents involve the use of generative AI tools. Understandably, surveyed security leaders have responded to this trend rapidly. Nearly half (47%) the security leaders surveyed in the 2026 Data Security Index are implementing generative AI-specific controls—an increase of 8% since the 2025 report. This helps enable innovation through the confident adoption of generative AI apps and agents while maintaining security.

A banner chart that says "32% of surveyed organizations' data security incidents involve use of AI tools."

Why it matters
Generative AI boosts productivity and innovation, but both unsanctioned and sanctioned AI tools must be managed. It’s essential to control tool use and monitor how data is accessed and shared with AI.

In the full report, we explore more deeply how AI-powered productivity is changing the risk profile of enterprises. We also explore several mechanisms, both technical and cultural, already helping maintain trust and reduce risk without sacrificing productivity gains or compliance.

3. Strengthening data security with generative AI

The 2026 Data Security Index indicates that 82% of organizations have developed plans to embed generative AI into their data security operations, up from 64% the previous year. From discovering sensitive data and detecting critical risks to investigating and triaging incidents, as well as refining policies, generative AI is being deployed for both proactive and reactive use cases at scale. The report explores how AI is changing the day-to-day operations across security teams, including the emergence of AI-assisted automation and agents.

alt text

Why it matters
Generative AI automates risk detection, scales protection, and accelerates response—amplifying human expertise while maintaining oversight.

Our generative AI systems are constantly observing, learning, and making recommendations for modifications with far more data than would be possible with any kind of manual or quasi-manual process.”

—Director of IT in the energy industry

Turning recommendations into action

As organizations confront the challenges of data security in the age of AI, the 2026 Data Security Index report offers three clear imperatives: unifying data security, increasing generative AI oversight, and using AI solutions to improve data security effectiveness.

  1. Unified data security requires continuous oversight and coordinated enforcement across your data estate. Achieving this scenario demands mechanisms that can discover, classify, and protect sensitive information at scale while extending safeguards to endpoints and workloads. Microsoft Purview DSPM operationalizes this principle through continuous discovery, classification, and protection of sensitive data across cloud, software as a service (SaaS), and on-premises assets.
  2. Responsible AI adoption depends on strict (but dynamic) controls and proactive data risk management. Organizations must enforce automated mechanisms that prevent unauthorized data exposure, monitor for anomalous usage, and guide employees toward sanctioned tools and responsible practices. Microsoft enforces these principles through governance policies supported by Microsoft Purview Data Loss Prevention and Microsoft Defender for Cloud Apps. These solutions detect, prevent, and respond to risky generative AI behaviors that increase the likelihood of data exposure, policy violations, or unsafe outputs, ensuring innovation aligns with security and compliance requirements.
  3. Modern security operations benefit from automation that accelerate detection and response alongside strong oversight. AI-powered agents can streamline threat investigation, recommend policies, and reduce manual workload while maintaining human oversight for accountability. We deliver this capability through Microsoft Security Copilot, embedded across Microsoft Sentinel, Microsoft Entra, Microsoft Intune, Microsoft Purview, and Microsoft Defender. These agents automate threat detection, incident investigation, and policy recommendations, enabling faster response and continuous improvement of security posture.

Stay informed, stay productive, stay protected

The insights we’ve covered here only scratch the surface of what the Microsoft Data Security Index reveals.The full report dives deeper into global trends, detailed metrics, and real-world perspectives from security leaders across industries and the globe. It provides specificity and context to help you shape your generative AI strategy with confidence.

If you want to explore the data behind these findings, see how priorities vary by region, and uncover actionable recommendations for secure AI adoption, read the full 2026 Microsoft Data Security Index to access comprehensive research, expert commentary, and practical guidance for building a security-first foundation for innovation.

Learn more

Learn more about the Microsoft Purview unified data security solutions.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.

The post New Microsoft Data Security Index report explores secure AI adoption to protect sensitive data appeared first on Microsoft Security Blog.

]]>
Microsoft Security success stories: Why integrated security is the foundation of AI transformation http://approjects.co.za/?big=en-us/security/blog/2026/01/22/microsoft-security-success-stories-why-integrated-security-is-the-foundation-of-ai-transformation/ Thu, 22 Jan 2026 17:00:00 +0000 http://approjects.co.za/?big=en-us/security/blog/?p=144835 Discover how Ford, Icertis, and TriNet modernized security with Microsoft—embedding Zero Trust, automating defenses, and enabling secure AI innovation at scale.

The post Microsoft Security success stories: Why integrated security is the foundation of AI transformation appeared first on Microsoft Security Blog.

]]>
AI is transforming how organizations operate and how they approach security. In this new era of agentic AI, every interaction, digital or human, must be built on trust. As businesses modernize, they’re not just adopting AI tools, they’re rearchitecting their digital foundations. And that means security can’t be an afterthought. It must be woven in from the beginning into every layer of the stack—ubiquitous, ambient, and autonomous—just like the AI it protects. 

In this blog, we spotlight three global organizations that are leading the way. Each is taking a proactive, platform-first approach to security—moving beyond fragmented defenses and embedding protection across identity, data, devices, and cloud infrastructure. Their stories show that when security is deeply integrated from the start, it becomes a strategic enabler of resilience, agility, and innovation. And by choosing Microsoft Security, these customers are securing the foundation of their AI transformation from end to end.

Why security transformation matters to decision makers

Security is a board-level priority. The following customer stories show how strategic investments in security platforms can drive cost savings, operational efficiency, and business agility, not just risk reduction. Read on to learn how Ford, Icertis, and TriNet transformed their operations with support from Microsoft.

Ford builds trust across global operations

In the automotive industry, a single cyberattack can ripple across numerous aspects of the business. Ford recognized that rising ransomware and targeted cyberattacks demanded a different approach. The company made a deliberate shift away from fragmented, custom-built security tools toward a unified Microsoft security platform, adopting a Zero Trust approach and prioritizing security embedded into every layer of its hybrid environment—from endpoints to data centers and cloud infrastructure.

Unified protection and measurable impact

Partnering with Microsoft, Ford deployed Microsoft Defender, Microsoft Sentinel, Microsoft Purview, and Microsoft Entra to strengthen defenses, centralize threat detection, and enforce data governance. AI-powered telemetry and automation improved visibility and accelerated incident response, while compliance certifications supported global scaling. By building a security-first culture and leveraging Microsoft’s integrated stack, Ford reduced vulnerabilities, simplified operations, and positioned itself for secure growth across markets.

Read the full customer story to discover more about Ford’s security modernization collaboration with Microsoft.

Icertis cuts security operations center (SOC) incidents by 50%

As a global leader in contract intelligence, Icertis introduced generative AI to transform enterprise contracting, launching applications built on Microsoft Azure OpenAI and its Vera platform. These innovations brought new security challenges, including prompt injection risks and compliance demands across more than 300 Azure subscriptions. To address these, Icertis adopted Microsoft Defender for Cloud for AI posture management, threat detection, and regulatory alignment, ensuring sensitive contract data remains protected.

Driving security efficiency and resilience

By integrating Microsoft Security solutions—Defender for Cloud, Microsoft Sentinel, Purview, Entra, and Microsoft Security Copilot—Icertis strengthened governance and accelerated incident response. AI-powered automation reduced alert triage time by up to 80%, cut mean time to resolution to 25 minutes, and lowered incident volume by 50%. With Zero Trust principles and embedded security practices, Icertis scales innovation securely while maintaining compliance, setting a new standard for trust in AI-powered contracting.

Read the full customer story to learn how Icertis secures sensitive contract data, accelerates AI innovation, and achieves measurable risk reduction with Microsoft’s unified security platform.

TriNet moves to Microsoft 365 E5, achieves annual savings in security spend

Facing growing complexity from multiple point solutions, TriNet sought to reduce operational overhead and strengthen its security posture. The company’s leadership recognized that consolidating tools could improve visibility, reduce risk, and align security with its broader digital strategy. After evaluating providers, TriNet chose Microsoft 365 E5 for its integrated security platform, delivering advanced threat protection, identity management, and compliance capabilities.

Streamlined operations and improved efficiencies

By adopting Microsoft Defender XDR, Purview, Entra, Microsoft Sentinel, and Microsoft 365 Copilot, TriNet unified security across endpoints, cloud apps, and data governance. Automation and centralized monitoring reduced alert fatigue, accelerated incident response, and improved Secure Score. The platform blocked a spear phishing attempt targeting executives, demonstrating the value of Zero Trust and advanced safeguards. With cost savings from tool consolidation and improved efficiency, TriNet is building a secure foundation for future innovation.

Read the full customer story to see how TriNet consolidated its security stack with Microsoft 365 E5, reduced complexity, and strengthened defenses against advanced threats.

How to plan, adopt, and operationalize a Microsoft Security strategy 

Ford, Icertis, and TriNet each began their transformation by assessing legacy systems and identifying gaps that created complexity and risk. Ford faced fragmented tools across a global manufacturing footprint, Icertis needed to secure sensitive contract data while adopting generative AI, and TriNet aimed to reduce operational complexity caused by managing multiple point solutions, seeking a more streamlined and integrated approach. These assessments revealed the need for a unified, risk-based strategy to simplify operations and strengthen protection.

Building on Zero Trust and deploying integrated solutions

All three organizations aligned on Zero Trust principles as the foundation for modernization. They consolidated security into Microsoft’s integrated platform, deploying Defender for endpoint and cloud protection, Microsoft Sentinel for centralized monitoring, Purview for data governance, Entra for identity management, and Security Copilot for AI-powered insights. This phased rollout allowed each company to embed security into daily operations while reducing manual processes and improving visibility.

Measuring impact and sharing best practices

The results were tangible: Ford accelerated threat detection and governance across its hybrid environment, Icertis cut incident volume by 50% and reduced triage time by 80%, and TriNet improved Secure Score while achieving cost savings through tool consolidation. Automation and AI-powered workflows delivered faster response times and reduced complexity. Each organization now shares learnings internally and with industry peers—whether through executive briefings, training programs, or participation in cybersecurity forums—helping set new standards for resilience and innovation.

Working towards a more secure future

The future of enterprise security is being redefined by AI, by innovation, and by the bold choices organizations make today. Modernization, automation, and collaboration are no longer optional—they’re foundational. As AI reshapes how we work, build, and protect, security must evolve in lockstep: not as an add-on, but as a fabric woven through every layer of the enterprise. 

These customer stories show us that building a security-first approach isn’t just possible; it’s imperative. From cloud-native disruptors to global institutions modernizing complex environments, leading organizations are showing what’s possible when security and AI move together. By unifying their tools, automating what once was manual, and using AI to stay ahead of emerging cyberthreats, they’re not just protecting today, they’re securing the future and shaping what comes next. 

Share your thoughts

Are you a regular user of Microsoft Security products? Share your insights and experiences on Gartner Peer Insights™.

Learn more

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.

The post Microsoft Security success stories: Why integrated security is the foundation of AI transformation appeared first on Microsoft Security Blog.

]]>
Four priorities for AI-powered identity and network access security in 2026 http://approjects.co.za/?big=en-us/security/blog/2026/01/20/four-priorities-for-ai-powered-identity-and-network-access-security-in-2026/ Tue, 20 Jan 2026 17:00:00 +0000 Discover four key identity and access priorities for the new year to strengthen your organization's identity security baseline.

The post Four priorities for AI-powered identity and network access security in 2026 appeared first on Microsoft Security Blog.

]]>
No doubt, your organization has been hard at work over the past several years implementing industry best practices, including a Zero Trust architecture. But even so, the cybersecurity race only continues to intensify.

AI has quickly become a powerful tool misused by threat actors, who use it to slip into the tiniest crack in your defenses. They use AI to automate and launch password attacks and phishing attempts at scale, craft emails that seem to come from people you know, manufacture voicemails and videos that impersonate people, join calls, request IT support, and reset passwords. They even use AI to rewrite AI agents on the fly as they compromise and traverse your network.

To stay ahead in the coming year, we recommend four priorities for identity security leaders:

  1. Implement fast, adaptive, and relentless AI-powered protection.
  2. Manage, govern, and protect AI and agents.
  3. Extend Zero Trust principles everywhere with an integrated Access Fabric security solution.
  4. Strengthen your identity and access foundation to start secure and stay secure.

Secure Access Webinar

Enhance your security strategy: Deep dive into how to unify identity and network access through practical Zero Trust measures in our comprehensive four-part series.

A man uses multifactor authentication.

1. Implement fast, adaptive, and relentless AI-powered protection

2026 is the year to integrate AI agents into your workflows to reduce risk, accelerate decisions, and strengthen your defenses.

While security systems generate plenty of signals, the work of turning that data into clear next steps is still too manual and error-prone. Investigations, policy tuning, and response actions require stitching together an overwhelming volume of context from multiple tools, often under pressure. When cyberattackers are operating at the speed and scale of AI, human-only workflows constrain defenders.

That’s where generative AI and agentic AI come in. Instead of reacting to incidents after the fact, AI agents help your identity teams proactively design, refine, and govern access. Which policies should you create? How do you keep them current? Agents work alongside you to identify policy gaps, recommend smarter and more consistent controls, and continuously improve coverage without adding friction for your users. You can interact with these agents the same way you’d talk to a colleague. They can help you analyze sign-in patterns, existing policies, and identity posture to understand what policies you need, why they matter, and how to improve them.

In a recent study, identity admins using the Conditional Access Optimization Agent in Microsoft Entra completed Conditional Access tasks 43% faster and 48% more accurately across tested scenarios. These gains directly translate into a stronger identity security posture with fewer gaps for cyberattackers to exploit. Microsoft Entra also includes built-in AI agents for reasoning over users, apps, sign-ins, risks, and configurations in context. They can help you investigate anomalies, summarize risky behavior, review sign-in changes, remediate and investigate risks, and refine access policies.

The real advantage of AI-powered protection is speed, scale, and adaptability. Static, human-only workflows just can’t keep up with constantly evolving cyberattacks. Working side-by-side with AI agents, your teams can continuously assess posture, strengthen access controls, and respond to emerging risks before they turn into compromise.

Where to learn more: Get started with Microsoft Security Copilot agents in Microsoft Entra to help your team with everyday tasks and the complex scenarios that matter most.

2. Manage, govern, and protect AI and agents 

Another critical shift is to make every AI agent a first-class identity and govern it with the same rigor as human identities. This means inventorying agents, assigning clear ownership, governing what they can access, and applying consistent security standards across all identities.

Just as unsanctioned software as a service (SaaS) apps once created shadow IT and data leakage risks, organizations now face agent sprawl—an exploding number of AI systems that can access data, call external services, and act autonomously. While you want your employees to get the most out of these powerful and convenient productivity tools, you also want to protect them from new risks.

Fortunately, the same Zero Trust principles that apply to human employees apply to AI agents, and now you can use the same tools to manage both. You can also add more advanced controls: monitoring agent interaction with external services, enforcing guardrails around internet access, and preventing sensitive data from flowing into unauthorized AI or SaaS applications.

With Microsoft Entra Agent ID, you can register and manage agents using familiar Entra experiences. Each agent receives its own identity, which improves visibility and auditability across your security stack. Requiring a human sponsor to govern an agent’s identity and lifecycle helps prevent orphaned agents and preserves accountability as agents and teams evolve. You can even automate lifecycle actions to onboard and retire agents. With Conditional Access policies, you can block risky agents and set guardrails for least privilege and just in time access to resources.

To govern how employees use agents and to prevent misuse, you can turn to Microsoft Entra Internet Access, included in Microsoft Entra Suite. It’s now a secure web and AI gateway that works with Microsoft Defender to help you discover use of unsanctioned private apps, shadow IT, generative AI, and SaaS apps. It also protects against prompt injection attacks and prevents data exfiltration by integrating network filtering with Microsoft Purview classification policies.

When you have observability into everything that traverses your network, you can embrace AI confidently while ensuring that agents operate safely, responsibly, and in line with organizational policy.

Where to learn more: Get started with Microsoft Entra Agent ID and Microsoft Entra Suite.

3. Extend Zero Trust principles everywhere with an integrated Access Fabric security solution

There’s often a gap between what your identity system can see and what’s happening on the network. That’s why our next recommendation is to unify the identity and network access layers of your Zero Trust architecture, so they can share signals and reinforce each other’s strengths through a unified policy engine. This gives you deeper visibility into and finer control over every user session.

Today, enterprise organizations juggle an average of five different identity solutions and four different network access solutions, usually from multiple vendors.1 Each solution enforces access differently with disconnected policies that limit visibility across identity and network layers. Cyberattackers are weaponizing AI to scale phishing campaigns and automate intrusions to exploit the seams between these siloed solutions, resulting in more breaches.2

An access security platform that integrates context from identity, network, and endpoints creates a dynamic safety net—an Access Fabric—that surrounds every digital interaction and helps keep organizational resources secure. An Access Fabric solution wraps every connection, session, and resource in consistent, intelligent access security, wherever work happens—in the cloud, on-premises, or at the edge. Because it reasons over context from identity, network, devices, agents, and other security tools, it determines access risk more accurately than an identity-only system. It continuously re‑evaluates trust across authentication and network layers, so it can enforce real‑time, risk‑based access decisions beyond first sign‑in.

Microsoft Entra delivers integrated access security across AI and SaaS apps, internet traffic, and private resources by bringing identity and network access controls together under a unified Zero Trust policy engine, Microsoft Entra Conditional Access. It continuously monitors user and network risk levels. If any of those risk levels change, it enforces policies that adapt in real time, so you can block access for users, apps, and even AI agents before they cause damage.

Your security teams can set policies in one central place and trust Entra to enforce them everywhere. The same adaptive controls protect human users, devices, and AI agents wherever they move, closing access security gaps while reducing the burden of managing multiple policies across multiple tools.

Where to learn more: Read our Access Fabric blog and learn more in our new four-part webinar series.

4. Strengthen your identity and access foundation to start secure and stay secure

To address modern cyberthreats, you need to start from a secure baseline—anchored in phishing‑resistant credentials and strong identity proofing—so only the right person can access your environment at every step of authentication and recovery.

A baseline security model sets minimum guardrails for identity, access, hardening, and monitoring. These guardrails include must-have controls, like those in security defaults, Microsoft-managed Conditional Access policies, or Baseline Security Mode in Microsoft 365. This approach includes moving away from easily compromised credentials like passwords and adopting passkeys to balance security with a fast, familiar sign-in experience. Equally important is high‑assurance account recovery and onboarding that combines a government‑issued ID with a biometric match to ensure that no bad actors or AI impersonators gain access.

Microsoft Entra makes it easy to implement these best practices. You can require phishing‑resistant credentials for any account accessing your environment and tailor passkey policies based on risk and regulatory needs. For example, admins or users in highly regulated industries can be required to use device‑bound passkeys such as physical security keys or Microsoft Authenticator, while other worker groups can use synced passkeys for a simpler experience and easier recovery. At a minimum, protect all admin accounts with phishing‑resistant credentials included in Microsoft Entra ID. You can even require new employees to set up a passkey before they can access anything. With Microsoft Entra Verified ID, you can add a live‑person check and validate government‑issued ID for both onboarding and account recovery.

Combining access control policies with device compliance, threat detection, and identity protection will further fortify your foundation. 

Where to learn more: Read our latest blog on passkeys and account recovery with Verified ID and learn how you can enable passkeys for your organization.

Support your identity and network access priorities with Microsoft

The plan for 2026 is straightforward: use AI to automate protection at speed and scale, protect the AI and agents your teams use to boost productivity, extend Zero Trust principles with an Access Fabric solution, and strengthen your identity security baseline. These measures will give your organization the resilience it needs to move fast without compromise. The threats will keep evolving—but you can tip the scales in your favor against increasingly sophisticated cyberattackers.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.


1Secure employee access in the age of AI report, Microsoft.

2Microsoft Digital Defense Report 2025.

The post Four priorities for AI-powered identity and network access security in 2026 appeared first on Microsoft Security Blog.

]]>