Secure agentic AI end-to-end
Microsoft Security Blog | Fri, 20 Mar 2026 | http://approjects.co.za/?big=en-us/security/blog/2026/03/20/secure-agentic-ai-end-to-end/

In this agentic era, security must be woven into, and around, every layer of the AI estate. At RSAC 2026, we are delivering on that vision with new purpose-built capabilities designed to help organizations secure agents, secure their foundations, and defend using agents and experts.

The post Secure agentic AI end-to-end appeared first on Microsoft Security Blog.

Next week, RSAC™ Conference celebrates its 35th anniversary as a forum that brings the security community together to address new challenges and embrace opportunities in our quest to make the world a safer place for all. As we look toward that milestone, agentic AI is rapidly reshaping industries as customers transform into Frontier Firms: organizations anchored in intelligence and trust that use agents to elevate human ambition and holistically reimagine their business to achieve their highest aspirations. Our recent research shows that 80% of Fortune 500 companies are already using agents.1

At the same time, this innovation is happening against a sea change in AI-powered attacks where agents can become “double agents.” And chief information officers (CIOs), chief information security officers (CISOs), and security decision makers are grappling with the resulting security implications: How do they observe, govern, and secure agents? How do they secure their foundations in this new era? How can they use agentic AI to protect their organization and detect and respond to traditional and emerging threats?

The answer starts with trust, and security has always been the root of trust. In this agentic era, security must be woven into, and around, every layer of the AI estate. It must be ambient and autonomous, just like the AI it protects. This is our vision for security as the core primitive of the AI stack.

At RSAC 2026, we are delivering on that vision with new purpose-built capabilities designed to help organizations secure agents, secure their foundations, and defend using agents and experts. Fueled by more than 100 trillion daily signals, Microsoft Security helps protect 1.6 million customers, one billion identities, and 24 billion Copilot interactions.2 Read on to learn how we can help you secure agentic AI.

Secure agents

Earlier this month, we announced that Agent 365 will be generally available on May 1. Agent 365—the control plane for agents—gives IT, security, and business teams the visibility and tools they need to observe, secure, and govern agents at scale using the infrastructure you already have and trust. It includes new Microsoft Defender, Entra, and Purview capabilities to help you secure agent access, prevent data oversharing, and defend against emerging threats.

Agent 365 is included in Microsoft 365 E7: The Frontier Suite along with Microsoft 365 Copilot, Microsoft Entra Suite, and Microsoft 365 E5, which includes many of the advanced Microsoft Security capabilities below to deliver comprehensive protection for your organization.

Secure your foundations

Along with securing agents, we also need to think of securing AI comprehensively. To truly secure agentic AI, we must secure foundations—the systems that agentic AI is built and runs on and the people who are developing and using AI. At RSAC 2026, we are introducing new capabilities to help you gain visibility into risks across your enterprise, secure identities with continuous adaptive access, safeguard sensitive data across AI workflows, and defend against threats at the speed and scale of AI.

Gain visibility into risks across your enterprise

As AI adoption accelerates, so does the need for comprehensive and continuous visibility into AI risks across your environment—from agents to AI apps and services. We are addressing this challenge with new capabilities that give you insight into risks across your enterprise so you know where AI is showing up, how it is being used, and where your exposure to risk may be growing. New capabilities include:

  • Security Dashboard for AI provides CISOs and security teams with unified visibility into AI-related risk across the organization. Now generally available.
  • Entra Internet Access Shadow AI Detection uses the network layer to identify previously unknown AI applications and surface unmanaged AI usage that might otherwise go undetected. Generally available March 31.
  • Enhanced Intune app inventory provides rich visibility into your app estate installed on devices, including AI-enabled apps, to support targeted remediation of high-risk software. Generally available in May.
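Conceptually, network-layer shadow AI detection works by matching outbound destinations against a catalog of known AI services and flagging those IT has not sanctioned. Here is a minimal illustrative sketch (the domain list, app names, and log format are invented for the example; this is not how Entra Internet Access is implemented):

```python
# Illustrative sketch: flag outbound requests whose destination matches a
# catalog of known AI services that IT has not yet sanctioned.

KNOWN_AI_DOMAINS = {          # hypothetical catalog of AI service domains
    "chat.example-ai.com": "ExampleChat",
    "api.example-llm.net": "ExampleLLM",
}
SANCTIONED = {"ExampleChat"}  # apps IT has approved

def detect_shadow_ai(network_log):
    """Return unsanctioned AI apps seen in a list of (user, domain) tuples."""
    findings = []
    for user, domain in network_log:
        app = KNOWN_AI_DOMAINS.get(domain)
        if app and app not in SANCTIONED:
            findings.append((user, app))
    return findings

log = [("alice", "chat.example-ai.com"), ("bob", "api.example-llm.net")]
print(detect_shadow_ai(log))  # only bob's unsanctioned app is flagged
```

In a real product the catalog is maintained continuously and the matching happens inline at the network edge, but the core idea is this lookup.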

Secure identities with continuous, adaptive access

Identity is the foundation of modern security, the most targeted layer in any environment, and the first line of defense. With Microsoft Entra, you can secure access and deliver comprehensive identity security using new capabilities that help you harden your identity infrastructure, improve tenant governance, modernize authentication, and make intelligent access decisions.

  • Entra Backup and Recovery strengthens resilience with an automated backup of Entra directory objects to enable rapid recovery in case of accidental data deletion or unauthorized changes. Now available in preview.
  • Entra Tenant Governance helps organizations discover unmanaged (shadow) Entra tenants and establish consistent tenant policies and governance in multi-tenant environments. Now available in preview.
  • Entra passkey capabilities now include synced passkeys and passkey profiles for maximum end-user flexibility, making it easy to move between devices, while organizations that want maximum control still have the option of device-bound passkeys. Plus, Entra passkeys are now natively integrated into the Windows Hello experience, making phishing-resistant passkey authentication more seamless on Windows devices. Synced passkeys and passkey profiles are generally available; passkey integration into Windows Hello is in preview.
  • Entra external Multi-Factor Authentication (MFA) allows organizations to connect external MFA providers directly with Microsoft Entra so they can leverage pre-existing MFA investments or use highly specialized MFA methods. Now generally available.
  • Entra adaptive risk remediation helps users securely regain access without help-desk friction through automatic self-remediation across authentication methods, adapting to where they are in their modern authentication journey. Generally available in April.
  • Unified identity security provides end-to-end coverage across identity infrastructure, the identity control plane, and identity threat detection and response (ITDR)—built for rapid response and real-time decisions. The new identity security dashboard in Microsoft Defender highlights the most impactful insights across human and non-human identities to help accelerate response, and the new identity risk score unifies account-level risk signals to deliver a comprehensive view of user risk to inform real-time access decisions and SecOps investigations. Now available in preview.
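To illustrate the idea behind a unified identity risk score, here is a hedged sketch of combining account-level risk signals into one score that gates a real-time access decision. The signal names, weights, and threshold are all invented for illustration; this is not Microsoft's actual risk model:

```python
# Hypothetical sketch: unify account-level risk signals into a single score
# that informs an access decision (signal names and weights are invented).

RISK_WEIGHTS = {"impossible_travel": 0.5, "leaked_credentials": 0.8,
                "anomalous_token": 0.4}

def identity_risk_score(signals):
    """Combine active risk signals into a single 0-1 score (capped)."""
    return min(1.0, sum(RISK_WEIGHTS.get(s, 0.0) for s in signals))

def access_decision(signals, threshold=0.7):
    """Map the unified score to a real-time access outcome."""
    score = identity_risk_score(signals)
    if score >= threshold:
        return "block"
    if score > 0.0:
        return "require_mfa"
    return "allow"

print(access_decision([]))                                           # allow
print(access_decision(["impossible_travel"]))                        # require_mfa
print(access_decision(["impossible_travel", "leaked_credentials"]))  # block
```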

Safeguard sensitive data across AI workflows

With AI embedded in everyday work, sensitive data increasingly moves through prompts, responses, and grounding flows—often faster than policies can keep up. Security teams need visibility into how AI interacts with data as well as the ability to stop data oversharing and data leakage. Microsoft brings data security directly into the AI control plane, giving organizations clear insight into risk, real-time enforcement at the point of use, and the confidence to enable AI responsibly across the enterprise. New Microsoft Purview capabilities include:

  • Expanded Purview data loss prevention for Microsoft 365 Copilot helps block sensitive information such as PII, credit card numbers, and custom data types in prompts from being processed or used for web grounding. Generally available March 31.
  • Purview embedded in Copilot Control System provides a unified view of AI‑related data risk directly in the Microsoft 365 Admin Center. Generally available in April.
  • Purview customizable data security reports enable tailored reporting and drilldowns to prioritized data security risks. Available in preview March 31.
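Inline DLP for prompts can be pictured as screening text against sensitive information types before it ever reaches the model or web grounding. The sketch below is a simplified illustration; the regex patterns are stand-ins, not Purview's SIT definitions:

```python
import re

# Conceptual sketch of inline prompt screening: detect sensitive information
# types in a prompt before it is processed. Patterns are simplified stand-ins.

SENSITIVE_TYPES = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ssn":         re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def screen_prompt(prompt):
    """Return the list of sensitive types detected in a prompt."""
    return [name for name, pat in SENSITIVE_TYPES.items() if pat.search(prompt)]

prompt = "Summarize the dispute for card 4111 1111 1111 1111"
hits = screen_prompt(prompt)
if hits:
    print(f"blocked: {hits}")  # a real policy might block or show a policy tip
```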

Defend against threats across endpoints, cloud, and AI services

Security teams need proactive 24/7 threat protection that disrupts threats early and contains them automatically. Microsoft is extending predictive shielding to proactively limit impact and reduce exposure, expanding our container security capabilities, and introducing network-layer protection against malicious AI prompts.

  • Entra Internet Access prompt injection protection helps block malicious AI prompts across apps and agents by enforcing universal network-level policies. Generally available March 31.
  • Enhanced Defender for Cloud container security includes binary drift and antimalware prevention to close gaps attackers exploit in containerized environments. Now available in preview.
  • Defender for Cloud posture management adds broader coverage and supports Amazon Web Services and Google Cloud Platform, delivering security recommendations and compliance insights for newly discovered resources. Available in preview in April.
  • Defender predictive shielding dynamically adjusts identity and access policies during active attacks, reducing exposure and limiting impact. Now available in preview.

Defend with agents and experts

To defend in the agentic age, we need agentic defense. This means having an agentic defense platform and security agents embedded directly into the flow of work, augmented by deep human expertise and comprehensive security services when you need them.

Agents built into the flow of security work

Security teams move fastest with targeted help where and when work is happening. As alerts surface and investigations unfold across identities, data, endpoints, and cloud workloads, AI-powered assistance needs to operate alongside defenders. With Security Copilot now included in Microsoft 365 E5 and E7, we are empowering defenders with agents embedded directly into daily security and IT operations that help accelerate response and reduce manual effort so they can focus on what matters most.

New agents available now include:

  • Security Analyst Agent in Microsoft Defender helps accelerate threat investigations by providing contextual analysis and guided workflows. Available in preview March 26.
  • Security Alert Triage Agent in Microsoft Defender builds on the capabilities of the phishing triage agent and extends to cloud and identity, autonomously analyzing, classifying, prioritizing, and resolving repetitive low-value alerts at scale. Available in preview in April.
  • Conditional Access Optimization Agent in Microsoft Entra enhancements add context-aware recommendations, deeper analysis, and phased rollout to strengthen identity security. Agent generally available, enhancements now available in preview.
  • Data Security Posture Agent in Microsoft Purview enhancements include a credential scanning capability that can be used to proactively detect credential exposure in your data. Now available in preview.
  • Data Security Triage Agent in Microsoft Purview enhancements include an advanced AI reasoning layer and improved interpretation of custom Sensitive Information Types (SITs), to improve agent outputs during alert triage. Agent generally available, enhancements available in preview March 31.
  • Over 15 new partner-built agents extend Security Copilot with additional capabilities, all available in the Security Store.

Scale with an agentic defense platform

To help defenders and agents work together in a more coordinated, intelligence-driven way, Microsoft is expanding Sentinel, the agentic defense platform, to unify context, automate end-to-end workflows, and standardize access, governance, and deployment across security solutions.

  • Sentinel data federation powered by Microsoft Fabric investigates external security data in place in Databricks, Microsoft Fabric, and Azure Data Lake Storage while preserving governance. Now available in preview.
  • Sentinel playbook generator with natural language orchestration helps accelerate investigations and automate complex workflows. Now available in preview.
  • Sentinel granular delegated administrator privileges and unified role-based access control enable secure, scalable management for partners and enterprise customers with cross-tenant collaboration. Now available in preview.
  • Security Store embedded in Purview and Entra makes it easier to discover and deploy agents directly within existing security experiences. Generally available March 31.
  • Sentinel custom graphs powered by Microsoft Fabric enable organization-specific views of the relationships across your environment. Now available in preview.
  • Sentinel model context protocol (MCP) entity analyzer helps teams automate faster with natural language and harnesses the flexibility of code to accelerate response. Generally available in April.
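The custom-graphs idea can be illustrated with a small example: model entities as nodes, observed relationships as directed edges, then answer questions like "what could this account reach?" The entities and relationships below are invented for the sketch:

```python
from collections import defaultdict

# Illustrative environment graph: entities as nodes, relationships as
# directed edges, plus a reachability query. All names are made up.

edges = [
    ("user:alice", "signs_in", "device:laptop-7"),
    ("user:alice", "owns", "agent:expense-bot"),
    ("agent:expense-bot", "reads", "store:finance-db"),
]

graph = defaultdict(list)
for src, rel, dst in edges:
    graph[src].append(dst)

def reachable(start):
    """All entities reachable from `start` via directed relationships."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

print(sorted(reachable("user:alice")))
# ['agent:expense-bot', 'device:laptop-7', 'store:finance-db']
```

A blast-radius query like this is the kind of organization-specific view a graph over your own telemetry makes possible.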

Strengthen with experts

Even the most mature security organizations face moments that call for deeper partnership—a sophisticated attack, a complex investigation, a situation where seasoned expertise alongside your team makes all the difference. The Microsoft Defender Experts Suite brings together expert-led services—technical advisory, managed extended detection and response (MXDR), and end-to-end proactive and reactive incident response—to help you defend against advanced cyber threats, build long-term resilience, and modernize security operations with confidence.

Apply Zero Trust for AI

Zero Trust has always been built on three principles: verify explicitly, use least privilege, and assume breach. As AI becomes embedded across your entire environment—from the models you build on, to the data they consume, to the agents that act on your behalf—applying those principles has never been more critical. At RSAC 2026, we’re extending our Zero Trust architecture across the full AI lifecycle, from data ingestion and model training to deployment and agent behavior. And we’re making it actionable with an updated Zero Trust for AI reference architecture, workshop, assessment tool, and new patterns and practices articles to help you improve your security posture.
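As a thought experiment, the three principles can be expressed as checks on a single resource request. Everything in this sketch (names, risk threshold, token shape) is illustrative, not a Microsoft API:

```python
# Minimal sketch of the three Zero Trust principles applied to an agent's
# resource request. All names and structures here are invented for the example.

def authorize(request, identity, granted_scopes, audit_log):
    # Verify explicitly: authenticate and risk-check every request,
    # regardless of where it comes from.
    if not identity.get("token_valid") or identity.get("risk", 0) > 0.7:
        return False
    # Use least privilege: the requested scope must be explicitly granted.
    if request["scope"] not in granted_scopes:
        return False
    # Assume breach: record every decision for detection and investigation.
    audit_log.append((identity["name"], request["scope"]))
    return True

log = []
ident = {"name": "agent:report-bot", "token_valid": True, "risk": 0.1}
print(authorize({"scope": "read:sales"}, ident, {"read:sales"}, log))  # True
print(authorize({"scope": "write:hr"}, ident, {"read:sales"}, log))    # False
```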

See you at RSAC

If you’re joining the global security community in San Francisco for RSAC 2026 Conference, we invite you to connect with us. Join us at our Microsoft Pre-Day event and stop by our booth at the RSAC Conference North Expo (N-5744) to explore our latest innovations across Microsoft Agent 365, Microsoft Defender, Microsoft Entra, Microsoft Purview, Microsoft Sentinel, and Microsoft Security Copilot and see firsthand how we can help your organization secure agents, secure your foundation, and help you defend with agents and experts. The future of security is ambient, autonomous, and built for the era of AI. Let’s build it together.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.


1Based on Microsoft first-party telemetry measuring agents built with Microsoft Copilot Studio or Microsoft Agent Builder that were in use during the last 28 days of November 2025.

2Microsoft Fiscal Year 2026 First Quarter Earnings Conference Call and Microsoft Fiscal Year 2026 Second Quarter Earnings Conference Call


New Microsoft Purview innovations for Fabric to safely accelerate your AI transformation
Microsoft Security Blog | Mon, 16 Mar 2026 | https://techcommunity.microsoft.com/blog/microsoft-security-blog/new-microsoft-purview-innovations-for-fabric-to-safely-accelerate-your-ai-transf/4502156

As organizations adopt AI, security and governance remain core primitives for safe AI transformation and acceleration.

The post New Microsoft Purview innovations for Fabric to safely accelerate your AI transformation appeared first on Microsoft Security Blog.

As organizations adopt AI, security and governance remain core primitives for safe AI transformation and acceleration. After all, data leaders know:

Your AI is only as good as your data.

Organizations are hesitant about AI transformation because of concerns about sensitive data oversharing and poor data quality. In fact, 86% of organizations lack visibility into AI data flows, operating in the dark about what information employees share with AI systems [1]. Compounding this challenge, about 67% of executives are uncomfortable using data for AI due to quality concerns [2]. To use AI safely, organizations need to address data oversharing and data quality seamlessly. Microsoft Purview offers a modern, unified approach to help organizations secure and govern data across their entire data estate, with best-in-class integrations across Microsoft 365, Microsoft Fabric, and Azure that streamline oversight and reduce complexity.

At FabCon Atlanta, we’re announcing new Microsoft Purview innovations for Fabric to help you seamlessly secure and confidently activate your data for AI transformation. These updates span data security and data governance, enabling Fabric users to both:

  1. Discover risks and prevent data oversharing in Fabric
  2. Improve governance processes and data quality across their data estate

1. Discover risks and prevent data oversharing in Fabric

As data volume increases with AI usage, Microsoft Purview secures your data with capabilities such as Information Protection, Data Loss Prevention (DLP), Insider Risk Management (IRM), and Data Security Posture Management (DSPM). These capabilities work together to secure data throughout its lifecycle and now specifically for your Fabric data estate. Here are a few new Purview innovations for your Fabric estate:

Microsoft Purview DLP policies to prevent data leakage for Fabric Warehouse and KQL/SQL DBs

Now generally available, Microsoft Purview DLP policies allow Fabric admins to prevent data oversharing in Fabric by triggering policy tips when sensitive data is detected in assets uploaded to Warehouses. Additionally, in preview, Purview DLP enables Fabric admins to restrict access to assets with sensitive data in KQL/SQL DBs and Fabric Warehouses, limiting access to just asset owners and allowed collaborators. These innovations expand the depth and breadth of existing DLP policies to ensure sensitive data in Fabric is protected.

Figure 1. DLP restrict access preventing data oversharing of customer information stored in a KQL database.
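To make the detection idea concrete, here is an illustrative sketch of how a DLP scan might confirm that a column value is a plausible card number rather than a random digit string, using the standard Luhn checksum. It is a simplified stand-in for Purview's sensitive information type matching, not its implementation:

```python
# Illustrative sketch: classify column values as card-number-shaped by
# validating the Luhn checksum before flagging the asset.

def luhn_valid(number: str) -> bool:
    """True if the string's digits pass the Luhn check (min 13 digits)."""
    digits = [int(c) for c in number if c.isdigit()]
    if len(digits) < 13:
        return False
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def scan_column(values):
    """Return values that look like valid card numbers."""
    return [v for v in values if luhn_valid(v)]

print(scan_column(["4111111111111111", "1234567812345678", "hello"]))
# ['4111111111111111']
```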

Microsoft Purview Insider Risk Management (IRM) indicators for Lakehouse, IRM data theft quick policy for Fabric, and IRM pay-as-you-go usage report for Fabric

Microsoft Purview Insider Risk Management is now generally available for Microsoft Fabric, extending its risk-detection capabilities to Fabric lakehouses (in addition to Power BI, which is supported today) with ready-to-use risk indicators for risky user activities, such as sharing data from a Fabric lakehouse with people outside the organization. Additionally, the IRM data theft policy is now generally available, letting security admins create a policy to detect Fabric data exfiltration, such as exporting Power BI reports. Organizations also gain visibility into billing with the IRM pay-as-you-go usage report for Fabric, an easy-to-use dashboard for tracking consumption and keeping costs predictable.

Figure 2. IRM identifying risky user behavior when handling data in a Fabric Lakehouse. 

Figure 3. Security admins can create a data theft policy to detect Fabric data exfiltration. 

Figure 4. Security admins can check the pay-as-you-go usage (processing units) across different workloads and activities such as the downgrading of sensitivity labels of a lakehouse through the usage report.

Microsoft Purview for all Fabric Copilots and Agents

Microsoft Purview currently provides capabilities in preview for all Copilots and Agents in Fabric. Organizations can:

  • Discover data risks such as sensitive data in user prompts and responses and receive recommended actions to reduce these risks.
  • Detect and remediate oversharing risks with Data Risk Assessments in DSPM, which identify potentially overshared, unprotected, or sensitive Fabric assets, giving teams clear visibility into where data exposure exists and enabling targeted actions—like applying labels or policies—to reduce risk and ensure Fabric data is AI‑ready and governed by design.
  • Identify risky AI usage with Microsoft Purview Insider Risk Management to investigate risky AI usage, such as an inadvertent user who has neglected security best practices and shared sensitive data in AI.
  • Govern AI usage with Microsoft Purview Audit, Microsoft Purview eDiscovery, retention policies, and non-compliant usage detection.

Figure 5. Purview DSPM provides admins with the ability to discover data risks such as a user’s attempt to obtain historical data within a data agent in the Data Science workload in Fabric. DSPM subsequently provides actions to solve this risk.

Now that we’ve covered how Purview helps secure Fabric data and AI, the next focus is ensuring Fabric users can use that data responsibly.

2. Improve governance processes and data quality across their data estate

Once an organization’s data is secured for AI, the next challenge is ensuring consumers can easily find and trust the data needed for AI. This is where the Purview Unified Catalog comes in, serving as the foundation for enterprise data governance. Estate-wide data discovery provides a holistic view of the data landscape, helping prevent valuable data from being underutilized. Built-in data quality tools enable teams to measure, monitor, and remediate issues such as incomplete records, inconsistencies, and redundancies, ensuring decisions and AI outcomes are based on trusted, reliable data. Purview provides additional governance capabilities for all data consumers and governance teams, supplementing the Fabric OneLake catalog for those who use it. Here are a few new innovations within the Purview Unified Catalog:

Publication workflows for data products and glossary terms

Now generally available, data owners can leverage Workflows in the Purview Unified Catalog to manage how data products and glossary terms are published. Customizable workflows help governance teams build a well-curated catalog faster by ensuring that data products and glossary terms are published and governed responsibly. Data consumers can request access to data products with the assurance that the data is held to a governance standard set by governance teams.

Figure 6. Customizing a Workflow for publishing a glossary term in your catalog.

Data quality for ungoverned assets in the Unified Catalog, including Fabric data  

In the Unified Catalog, data quality for ungoverned data assets allows organizations to run data quality on data assets, including Fabric assets, without linking them to data products. This approach lets data quality stewards run data quality faster and at greater scale, ensuring their organizations can democratize high-quality data for AI use cases.

Figure 7.  Running data quality on data assets without it being associated with a data product.
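The kinds of checks a data quality run applies can be sketched in a few lines: completeness and uniqueness rules scored per column. The column names, sample rows, and report shape below are invented for illustration:

```python
# Illustrative data quality rules scored against a small sample asset.
# Column names and rows are made up for the example.

rows = [
    {"customer_id": "C1", "email": "a@x.com"},
    {"customer_id": "C2", "email": None},
    {"customer_id": "C2", "email": "b@x.com"},   # duplicate customer_id
]

def completeness(rows, col):
    """Fraction of rows with a non-null value in `col`."""
    filled = sum(1 for r in rows if r[col] is not None)
    return filled / len(rows)

def uniqueness(rows, col):
    """Fraction of values in `col` that are distinct."""
    vals = [r[col] for r in rows]
    return len(set(vals)) / len(vals)

report = {
    "email_completeness": round(completeness(rows, "email"), 2),
    "id_uniqueness": round(uniqueness(rows, "customer_id"), 2),
}
print(report)  # {'email_completeness': 0.67, 'id_uniqueness': 0.67}
```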

Looking Forward

As organizations accelerate their AI ambitions, data security and governance become essential. Microsoft Purview and Microsoft Fabric deliver an integrated and unified foundation that enables organizations to innovate with confidence, ensuring data is protected, governed, and trusted for responsible AI activation.

We’re committed to helping you stay ahead of evolving challenges and opportunities as you unlock more value from your data. Explore these new capabilities and join us on the journey toward a more secure, governed, and AI‑ready data future.

[1] 2025 AI Security Gap: 83% of Organizations Flying Blind

[2] The Importance Of Data Quality: Metrics That Drive Business Success


Secure agentic AI for your Frontier Transformation
Microsoft Security Blog | Mon, 09 Mar 2026 | http://approjects.co.za/?big=en-us/security/blog/2026/03/09/secure-agentic-ai-for-your-frontier-transformation/

We are announcing the next step to make Frontier Transformation real for customers across every industry with Wave 3 of Microsoft 365 Copilot, Microsoft Agent 365, and Microsoft 365 E7: The Frontier Suite.

The post Secure agentic AI for your Frontier Transformation appeared first on Microsoft Security Blog.

Today we shared the next step to make Frontier Transformation real for customers across every industry with Wave 3 of Microsoft 365 Copilot, Microsoft Agent 365, and Microsoft 365 E7: The Frontier Suite.

As our customers rapidly embrace agentic AI, chief information officers (CIOs), chief information security officers (CISOs), and security decision makers are asking urgent questions: How do I track and monitor all these agents? How do I know what they are doing? Do they have the right access? Can they leak sensitive data? Are they protected from cyberthreats? How do I govern them?

Agent 365 and Microsoft 365 E7: The Frontier Suite, generally available on May 1, 2026, are designed to help answer these questions and give organizations the confidence to go further with AI.

Agent 365—the control plane for agents

As organizations adopt agentic AI, growing visibility and security gaps can increase the risk of agents becoming double agents. Without a unified control plane, IT, security, and business teams lack visibility into which agents exist, how they behave, who has access to them, and what potential security risks exist across the enterprise. With Microsoft Agent 365 you now have a unified control plane for agents that enables IT, security, and business teams to work together to observe, govern, and secure agents across your organization—including agents built with Microsoft AI platforms and agents from our ecosystem partners—using new Microsoft Security capabilities built into their existing flow of work.

Here is what that looks like in practice:

As we are now running Agent 365 in production, Avanade has real visibility into agent activity, the ability to govern agent sprawl, control resource usage, and manage agents as identity-aware digital entities in Microsoft Entra. This significantly reduces operational and security risk, represents a critical step forward in operationalizing the agent lifecycle at scale, and underscores Microsoft’s commitment to responsible, production-ready AI.

—Aaron Reich, Chief Technology and Information Officer, Avanade

Key Agent 365 capabilities include:

Observability for every role

With Agent 365, IT, security, and business teams gain visibility into all Agent 365 managed agents in their environment, understand how they are used, and can act quickly on performance, behavior, and risk signals relevant to their role—from within existing tools and workflows.

  • Agent Registry provides an inventory of agents in your organization, including agents built with Microsoft AI platforms, ecosystem partner agents, and agents registered through APIs. This agent inventory is available to IT teams in the Microsoft 365 admin center. Security teams see the same unified agent inventory in their existing Microsoft Defender and Purview workflows.
  • Agent behavior and performance observability provides detailed reports about agent performance, adoption and usage metrics, an agent map, and activity details.
  • Agent risk signals across Microsoft Defender*, Entra, and Purview* help security teams evaluate agent risk—just like they do for users—and block agent actions based on agent compromise, sign-in anomalies, and risky data interactions. Defender assesses risk of agent compromise, Entra evaluates identity risk, and Purview evaluates insider risk. IT also has visibility into these risks in the Microsoft 365 admin center.
  • Security policy templates, starting with Microsoft Entra, automate collaboration between IT and security. They enable security teams to define tenant-wide security policies that IT leaders can then enforce in the Microsoft 365 admin center as they onboard new agents.

*These capabilities are in public preview and will remain in preview on May 1.

Secure and govern agent access

Unmanaged agents may create significant risk, from accessing resources unchecked to accumulating excessive privileges and being misused by malicious actors. With Microsoft Entra capabilities included in Agent 365, you can secure agent identities and their access to resources.

  • Agent ID gives each agent a unique identity in Microsoft Entra, designed specifically for the needs of agents. With Agent ID, organizations can apply trusted access policies at scale, reduce gaps from unmanaged identities, and keep agent access aligned to existing organizational controls.
  • Identity Protection and Conditional Access for agents extend existing user policies that make real-time access decisions based on risks, device compliance from Microsoft Intune, and custom security attributes to agents working on behalf of a user. These policies help prevent compromise and help ensure that agents cannot be misused by malicious actors.
  • Identity Governance for agents enables identity leaders to limit agent access to only the resources they need, with access packages that can be scoped to a subset of the user’s permissions, and includes the ability to audit access granted to agents.
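The least-privilege idea behind scoped access packages can be sketched simply: an agent is granted only the intersection of what it requests and what its owning user already holds, so it can never exceed the user's permissions. The permission names below are hypothetical:

```python
# Hypothetical sketch of scoping an agent's access to a subset of its
# owning user's permissions. Permission names are invented.

def scope_agent_access(user_permissions, requested):
    """Grant only permissions the user holds AND the agent requested."""
    granted = set(requested) & set(user_permissions)
    denied = set(requested) - granted
    return granted, denied

user_perms = {"read:mail", "read:files", "send:mail"}
granted, denied = scope_agent_access(user_perms, ["read:mail", "admin:tenant"])
print(sorted(granted), sorted(denied))  # ['read:mail'] ['admin:tenant']
```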

Prevent data oversharing and ensure agent compliance

Microsoft Purview capabilities in Agent 365 provide comprehensive data security and compliance coverage for agents. You can protect agents from accessing sensitive data, prevent data leaks from risky insiders, and help ensure agents process data responsibly to support compliance with global regulations.

  • Data Security Posture Management provides visibility and insights into data risks for agents so data security admins can proactively mitigate those risks.
  • Information Protection helps ensure that agents inherit and honor Microsoft 365 data sensitivity labels so that they follow the same rules as users for handling sensitive data to prevent agent-led sensitive data leaks.
  • Inline Data Loss Prevention (DLP) for prompts to Microsoft Copilot Studio agents blocks sensitive information such as personally identifiable information, credit card numbers, and custom sensitive information types (SITs) from being processed in the runtime.
  • Insider Risk Management extends insider risk protection to agents to help ensure that risky agent interactions with sensitive data are blocked and flagged to data security admins.
  • Data Lifecycle Management enables data retention and deletion policies for prompts and agent-generated data so you can manage risk and liability by keeping the data that you need and deleting what you don’t.  
  • Audit and eDiscovery extend core compliance and records management capabilities to agents, treating AI agents as auditable entities alongside users and applications. This will help ensure that organizations can audit, investigate, and defensibly manage AI agent activity across the enterprise.
  • Communication Compliance extends to agent interactions to detect and enable human oversight of risky AI communications. This enables business leaders to extend their code of conduct and data compliance policies to AI communications.
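
To make the inline DLP idea above concrete, the sketch below screens a prompt against sensitive information type (SIT) patterns before an agent processes it. This is a simplified Python illustration with hypothetical regex patterns, not the Purview implementation; real SITs use checksums, confidence levels, and proximity evidence rather than bare regexes.

```python
import re

# Hypothetical patterns standing in for built-in and custom SITs.
# Real detection is far more sophisticated (e.g., Luhn checks for cards).
SIT_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def screen_prompt(prompt: str):
    """Return (allowed, matched_sit_names) before the agent runs the prompt."""
    matches = [name for name, pat in SIT_PATTERNS.items() if pat.search(prompt)]
    return (len(matches) == 0, matches)
```

A prompt that matches no pattern passes through; one containing a card-number-like string is blocked, and the matched SIT names can be surfaced to the user or logged for admins.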

Defend agents against emerging cyberthreats

To help you stay ahead of emerging cyberthreats, Agent 365 includes Microsoft Defender protections purpose-built to detect and mitigate specific AI vulnerabilities and threats such as prompt manipulation, model tampering, and agent-based attack chains.

  • Security posture management for Microsoft Foundry and Copilot Studio agents* detects misconfigurations and vulnerabilities in agents so security leaders can stay ahead of malicious actors by proactively resolving them before they become an attack vector.
  • Detection, investigation, and response for Foundry and Copilot Studio agents* enables the investigation and remediation of attacks that target agents and helps ensure that agents are accounted for in security investigations.
  • Runtime threat protection, investigation, and hunting** for agents that use the Agent 365 tools gateway helps organizations detect, block, and investigate malicious agent activities.

Agent 365 will be generally available on May 1, 2026, and priced at $15 per user per month. Learn more about Agent 365.

*These capabilities are in public preview and will remain in public preview on May 1.

**This new capability will enter public preview in April 2026 and will remain in public preview on May 1.

Microsoft 365 E7: The Frontier Suite

Microsoft 365 E7 brings together intelligence and trust to enable organizations to accelerate Frontier Transformation, equipping employees with AI across email, documents, meetings, spreadsheets, and business application surfaces. It also gives IT and security leaders the observability and governance needed to operate AI at enterprise scale.

Microsoft 365 E7 includes Microsoft 365 Copilot, Agent 365, Microsoft Entra Suite, and Microsoft 365 E5 with advanced Defender, Entra, Intune, and Purview security capabilities to help secure users, delivering comprehensive protection across users and agents. It will be available for purchase on May 1, 2026, at a retail price of $99 per user per month. Learn more about Microsoft 365 E7.

End-to-end security for the agentic era

Frontier Transformation is anchored in intelligence and trust, and trust starts with security. Microsoft Security capabilities help protect 1.6 million customers at the speed and scale of AI.1 With Agent 365, we are extending these enterprise-grade capabilities so organizations can observe, secure, and govern agents, and with Microsoft 365 E7 we are delivering comprehensive protection across agents and users.

Secure your Frontier Transformation today with Agent 365 and Microsoft 365 E7: The Frontier Suite. And join us at RSAC Conference 2026 to learn more about these new solutions and hear from industry experts and customers who are shaping how agents can be observed, governed, secured, and trusted in the real world.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.


1Microsoft Fiscal Year 2026 Second Quarter Earnings Conference Call.

The post Secure agentic AI for your Frontier Transformation appeared first on Microsoft Security Blog.

]]>
New e-book: Establishing a proactive defense with Microsoft Security Exposure Management http://approjects.co.za/?big=en-us/security/blog/2026/02/19/new-e-book-establishing-a-proactive-defense-with-microsoft-security-exposure-management/ Thu, 19 Feb 2026 17:00:00 +0000 Read the new maturity-based guide that helps organizations move from fragmented, reactive security practices to a unified exposure management approach that enables proactive defense.

The post New e-book: Establishing a proactive defense with Microsoft Security Exposure Management appeared first on Microsoft Security Blog.

]]>
Effective exposure management begins by illuminating and hardening risks across the entire attack surface. Some of the most meaningful shifts in security happen quietly—when teams take a clear look at their exposure landscape and acknowledge the gap between where they stand today and where they need to be. Today, we’re sharing a new guide designed to support that moment of clarity. It offers a practical, maturity-based path for moving from fragmented visibility and reactive fixes to a more unified, risk-driven approach that strengthens resilience one step at a time. Read “Establishing a proactive defense—A maturity-based guide for adopting a dynamic, risk-based approach to exposure management” to learn more.

Five levels of exposure management maturity 

In the guide, you’ll learn how organizations progress through five levels of exposure management maturity to strengthen how they identify, prioritize, and act on risk. Early-stage teams operate reactively with limited visibility and compliance-driven fixes. As capabilities mature, processes become consistent, prioritization incorporates business context, and decisions shift from reactive to proactive. This progression reflects a move away from isolated security actions toward repeatable, measurable practices that scale with organizational complexity. At higher maturity, organizations validate controls, consolidate asset and risk data into a single source of truth, and confirm that mitigations work. Rather than assuming security improvements are effective, teams test and verify outcomes to ensure effort translates into real risk reduction. At the most advanced stage, exposure management is fully aligned to business objectives, supported by clear risk metrics, and used to guide remediation, resource allocation, and strategic outcomes.

The maturity model helps security leaders assess where their organization stands and identify practical next steps toward a full-fledged exposure management program. Each level in the guide details the realities organizations face, the key characteristics of that maturity level, common pain points, and suggestions for advancing. Importantly, the model emphasizes that maturity is not static or final. The last stage of the maturity model, level five, isn’t a finish line—it’s the point where exposure management becomes a continuously evolving capability, fueled by real-time telemetry and adaptive risk modeling. At this stage, exposure management shifts from a program to a strategic discipline—one that informs long-term resilience decisions rather than discrete remediation cycles.

The path to proactive defense  

Organizations build a unified path to proactive defense when they move beyond fragmented tools and adopt an integrated exposure management approach. By bringing assets, identities, cloud posture, and attack paths into one coherent view, security teams gain the clarity needed to focus effort where it matters most. This alignment enables more consistent action, stronger prioritization, and security decisions that reflect real business risk instead of isolated signals. It also helps teams move from chasing individual findings to managing exposure systematically, with shared context across security, IT, and risk stakeholders. Over time, this shift turns exposure management into a repeatable operating model rather than a collection of disconnected responses. 

Take the next step toward proactive defense 

Designed to help security leaders translate strategy into practical next steps, regardless of where they are starting, the maturity levels outlined in the e-book support organizations as they shift from reacting to cyberthreats to proactively reducing risk and strengthening security across every layer of the environment. To go deeper into the practices, maturity levels, and actions that matter most, read the new e-book: Establishing a proactive defense—A maturity-based guide for adopting a dynamic, risk-based approach to exposure management.

Join us at RSAC™ 2026

RSAC™ 2026 is more than a conference. It’s a chance to shape the future of security. By engaging with Microsoft Security, you’ll gain:  

  • Actionable insights from industry leaders and researchers.  
  • Hands-on experience with cutting-edge security tools.  
  • Connections that help you navigate the evolving cyberthreat landscape.  

Together, we can make the world safer for all. Join us in San Francisco March 22-26, 2026, and be part of the conversation that defines the next era of cybersecurity.  

Learn more

Learn more about Microsoft Security Exposure Management.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity. 

The post New e-book: Establishing a proactive defense with Microsoft Security Exposure Management appeared first on Microsoft Security Blog.

]]>
80% of Fortune 500 use active AI Agents: Observability, governance, and security shape the new frontier http://approjects.co.za/?big=en-us/security/blog/2026/02/10/80-of-fortune-500-use-active-ai-agents-observability-governance-and-security-shape-the-new-frontier/ Tue, 10 Feb 2026 16:00:00 +0000 Read Microsoft's new Cyber Pulse report for straightforward, practical insights and guidance on new cybersecurity risks.

The post 80% of Fortune 500 use active AI Agents: Observability, governance, and security shape the new frontier appeared first on Microsoft Security Blog.

]]>
Today, Microsoft is releasing the new Cyber Pulse report to provide leaders with straightforward, practical insights and guidance on new cybersecurity risks. One of today’s most pressing concerns is the governance of AI and autonomous agents. AI agents are scaling faster than some companies can see them—and that visibility gap is a business risk.1 Like people, AI agents require protection through strong observability, governance, and security using Zero Trust principles. As the report highlights, organizations that succeed in the next phase of AI adoption will be those that move with speed and bring business, IT, security, and developer teams together to observe, govern, and secure their AI transformation.

Agent building isn’t limited to technical roles; today, employees in various positions create and use agents in daily work. More than 80% of Fortune 500 companies today use active AI agents built with low-code/no-code tools.2 AI is ubiquitous in many operations, and generative AI-powered agents are embedded in workflows across sales, finance, security, customer service, and product innovation.

With agent use expanding and transformation opportunities multiplying, now is the time to get foundational controls in place. AI agents should be held to the same standards as employees or service accounts. That means applying long‑standing Zero Trust security principles consistently:

  • Least privilege access: Give every user, AI agent, or system only what they need—no more.
  • Explicit verification: Always confirm who or what is requesting access using identity, device health, location, and risk level.
  • Assume compromise can occur: Design systems expecting that cyberattackers will get inside.

These principles are not new, and many security teams have implemented Zero Trust principles in their organization. What’s new is their application to non‑human users operating at scale and speed. Organizations that embed these controls within their deployment of AI agents from the beginning will be able to move faster, building trust in AI.
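
As a rough illustration, the three Zero Trust principles above can be expressed as a single access decision that denies by default and re-checks signals on every request, whether the principal is a person or an agent. Everything in this sketch (names, fields, grants) is hypothetical; real Conditional Access evaluates far richer signals.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    principal: str          # human user or agent identity
    resource: str           # permission being requested
    device_compliant: bool  # e.g., device health signal
    risk_level: str         # "low" | "medium" | "high"

# Least privilege: each principal gets an explicit, minimal allow-list.
GRANTS = {
    "invoice-agent": {"erp:invoices:read"},
    "alice": {"erp:invoices:read", "erp:invoices:approve"},
}

def authorize(req: AccessRequest) -> bool:
    # Explicit verification: evaluate signals on every request, not once at sign-in.
    if not req.device_compliant or req.risk_level == "high":
        return False
    # Assume compromise: deny by default unless the exact permission is granted.
    return req.resource in GRANTS.get(req.principal, set())
```

An agent granted only read access cannot approve invoices, and even a correctly scoped request is refused when risk signals are elevated.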

The rise of human-led AI agents

AI agent adoption is growing across many regions of the world, from the Americas to Europe, the Middle East, and Africa (EMEA), and Asia.

A graph showing the percentages of the regions around the world using AI agents.

According to Cyber Pulse, leading industries such as software and technology (16%), manufacturing (13%), financial institutions (11%), and retail (9%) are using agents to support increasingly complex tasks—drafting proposals, analyzing financial data, triaging security alerts, automating repetitive processes, and surfacing insights at machine speed.3 These agents can operate in assistive modes, responding to user prompts, or autonomously, executing tasks with minimal human intervention.

A graphic showing the percentage of industries using agents to support complex tasks.
Source: Industry Agent Metrics were created using Microsoft first-party telemetry measuring agents built with Microsoft Copilot Studio or Microsoft Agent Builder that were in use during the last 28 days of November 2025.

And unlike traditional software, agents are dynamic. They act. They decide. They access data. And increasingly, they interact with other agents.

That changes the risk profile fundamentally.

The blind spot: Agent growth without observability, governance, and security

Despite the rapid adoption of AI agents, many organizations struggle to answer some basic questions:

  • How many agents are running across the enterprise?
  • Who owns them?
  • What data do they touch?
  • Which agents are sanctioned—and which are not?

This is not a hypothetical concern. Shadow IT has existed for decades, but shadow AI introduces new dimensions of risk. Agents can inherit permissions, access sensitive information, and generate outputs at scale—sometimes outside the visibility of IT and security teams. Bad actors might exploit agents’ access and privileges, turning them into unintended double agents. Like human employees, an agent with too much access—or the wrong instructions—can become a vulnerability. When leaders lack observability in their AI ecosystem, risk accumulates silently.

According to the Cyber Pulse report, already 29% of employees have turned to unsanctioned AI agents for work tasks.4 This disparity is noteworthy, as it indicates that numerous organizations are deploying AI capabilities and agents prior to establishing appropriate controls for access management, data protection, compliance, and accountability. In regulated sectors such as financial services, healthcare, and the public sector, this gap can have particularly significant consequences.

Why observability comes first

You can’t protect what you can’t see, and you can’t manage what you don’t understand. Observability means having a control plane across all layers of the organization (IT, security, developers, and AI teams) to understand:

  • What agents exist 
  • Who owns them 
  • What systems and data they touch 
  • How they behave 

In the Cyber Pulse report, we outline five core capabilities that organizations need to establish for true observability and governance of AI agents:

  • Registry: A centralized registry acts as a single source of truth for all agents across the organization—sanctioned, third‑party, and emerging shadow agents. This inventory helps prevent agent sprawl, enables accountability, and supports discovery while allowing unsanctioned agents to be restricted or quarantined when necessary.
  • Access control: Each agent is governed using the same identity‑ and policy‑driven access controls applied to human users and applications. Least‑privilege permissions, enforced consistently, help ensure agents can access only the data, systems, and workflows required to fulfill their purpose—no more, no less.
  • Visualization: Real‑time dashboards and telemetry provide insight into how agents interact with people, data, and systems. Leaders can see where agents are operating, understand dependencies, and monitor behavior and impact—supporting faster detection of misuse, drift, or emerging risk.
  • Interoperability: Agents operate across Microsoft platforms, open‑source frameworks, and third‑party ecosystems under a consistent governance model. This interoperability allows agents to collaborate with people and other agents across workflows while remaining managed within the same enterprise controls.
  • Security: Built‑in protections safeguard agents from internal misuse and external cyberthreats. Security signals, policy enforcement, and integrated tooling help organizations detect compromised or misaligned agents early and respond quickly—before issues escalate into business, regulatory, or reputational harm.
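
To make the registry and access-control capabilities above concrete, here is a minimal sketch of what a central agent inventory might look like: a single source of truth that records ownership and scopes, and can quarantine shadow agents. The class and field names are illustrative assumptions, not an actual Microsoft API.

```python
from dataclasses import dataclass, field

@dataclass
class AgentRecord:
    agent_id: str
    owner: str                       # accountable human or team
    status: str = "unsanctioned"     # "sanctioned" | "unsanctioned" | "quarantined"
    scopes: set = field(default_factory=set)  # least-privilege permissions

class AgentRegistry:
    """Single source of truth for sanctioned, third-party, and shadow agents."""

    def __init__(self):
        self._agents = {}

    def register(self, record: AgentRecord):
        self._agents[record.agent_id] = record

    def quarantine(self, agent_id: str):
        # Shadow agents discovered outside policy are restricted here:
        # status flips and all permissions are revoked.
        rec = self._agents[agent_id]
        rec.status = "quarantined"
        rec.scopes.clear()

    def sanctioned(self):
        return [a for a in self._agents.values() if a.status == "sanctioned"]
```

Even this toy version shows why inventory comes first: quarantine and scope revocation are only possible for agents the organization knows about.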

Governance and security are not the same—and both matter

One important clarification emerging from Cyber Pulse is this: governance and security are related, but not interchangeable.

  • Governance defines ownership, accountability, policy, and oversight.
  • Security enforces controls, protects access, and detects cyberthreats.

Both are required. And neither can succeed in isolation.

AI governance cannot live solely within IT, and AI security cannot be delegated only to chief information security officers (CISOs). This is a cross-functional responsibility, spanning legal, compliance, human resources, data science, business leadership, and the board.

When AI risk is treated as a core enterprise risk—alongside financial, operational, and regulatory risk—organizations are better positioned to move quickly and safely.

Strong security and governance do more than reduce risk—they enable transparency. And transparency is fast becoming a competitive advantage.

From risk management to competitive advantage

This is an exciting time for leading Frontier Firms. Many organizations are already using this moment to modernize governance, reduce overshared data, and establish security controls that allow safe use. They are proving that security and innovation are not opposing forces; they are reinforcing ones. Security is a catalyst for innovation.

According to the Cyber Pulse report, the leaders who act now will mitigate risk, unlock faster innovation, protect customer trust, and build resilience into the very fabric of their AI-powered enterprises. The future belongs to organizations that innovate at machine speed and observe, govern and secure with the same precision. If we get this right, and I know we will, AI becomes more than a breakthrough in technology—it becomes a breakthrough in human ambition.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.


1Microsoft Data Security Index 2026: Unifying Data Protection and AI Innovation, Microsoft Security, 2026.

2Based on Microsoft first‑party telemetry measuring agents built with Microsoft Copilot Studio or Microsoft Agent Builder that were in use during the last 28 days of November 2025.

3Industry and Regional Agent Metrics were created using Microsoft first‑party telemetry measuring agents built with Microsoft Copilot Studio or Microsoft Agent Builder that were in use during the last 28 days of November 2025.

4July 2025 multi-national survey of more than 1,700 data security professionals commissioned by Microsoft from Hypothesis Group.

Methodology:

Industry and Regional Agent Metrics were created using Microsoft first‑party telemetry measuring agents built with Microsoft Copilot Studio or Microsoft Agent Builder that were in use during the past 28 days of November 2025. 

2026 Data Security Index: 

A 25-minute multinational online survey was conducted from July 16 to August 11, 2025, among 1,725 data security leaders. 

Questions centered around the data security landscape, data security incidents, securing employee use of generative AI, and the use of generative AI in data security programs to highlight comparisons to 2024. 

One-hour in-depth interviews were conducted with 10 data security leaders in the United States and United Kingdom to garner stories about how they are approaching data security in their organizations. 

Definitions: 

Active Agents are 1) deployed to production and 2) have some “real activity” associated with them in the past 28 days.  

“Real activity” is defined as 1+ engagement with a user (assistive agents) OR 1+ autonomous runs (autonomous agents).  
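
The two definitions above can be sketched as a simple predicate over per-agent telemetry; the dictionary field names here are illustrative, not the actual telemetry schema.

```python
def is_active(agent: dict) -> bool:
    """Active = deployed to production AND shows "real activity" in the past
    28 days: 1+ user engagement (assistive) or 1+ autonomous run (autonomous)."""
    if not agent.get("deployed_to_production"):
        return False
    if agent.get("mode") == "assistive":
        return agent.get("user_engagements_28d", 0) >= 1
    return agent.get("autonomous_runs_28d", 0) >= 1
```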

The post 80% of Fortune 500 use active AI Agents: Observability, governance, and security shape the new frontier appeared first on Microsoft Security Blog.

]]>
New Microsoft Data Security Index report explores secure AI adoption to protect sensitive data http://approjects.co.za/?big=en-us/security/blog/2026/01/29/new-microsoft-data-security-index-report-explores-secure-ai-adoption-to-protect-sensitive-data/ Thu, 29 Jan 2026 17:00:00 +0000 The 2026 Microsoft Data Security Index explores one of the most pressing questions facing organizations today: How can we harness the power of generative AI while safeguarding sensitive data?

The post New Microsoft Data Security Index report explores secure AI adoption to protect sensitive data appeared first on Microsoft Security Blog.

]]>
Generative AI and agentic AI are redefining how organizations innovate and operate, unlocking new levels of productivity, creativity and collaboration across industry teams. From accelerating content creation to streamlining workflows, AI offers transformative benefits that empower organizations to work smarter and faster. These capabilities, however, also introduce new dimensions of data risk—as AI adoption grows, so does the urgency for effective data security that keeps pace with AI innovation. In the 2026 Microsoft Data Security Index report, we explored one of the most pressing questions facing today’s organizations: How can we harness the power of AI while safeguarding sensitive data?

47% of surveyed organizations are​ implementing controls focused on generative AI workloads

To fully realize the potential of AI, organizations must pair innovation with responsibility and robust data security. This year, the Data Security Index report builds upon the responses of more than 1,700 security leaders to highlight three critical priorities for protecting organizational data and securing AI adoption:

  1. Moving from fragmented tools to unified data security.
  2. Managing AI-powered productivity securely.
  3. Strengthening data security with generative AI itself.

By consolidating solutions for better visibility and governance controls, implementing robust control processes to protect data in AI-powered workflows, and using generative AI agents and automation to enhance security programs, organizations can build a resilient foundation for their next wave of generative AI-powered productivity and innovation. The result is a future where AI both drives efficiency and acts as a powerful ally in defending against data risk, unlocking growth without compromising protection.

In this article we will delve into some of the Data Security Index report’s key findings that relate to generative AI and how they are being operationalized at Microsoft. The report itself has a much broader focus and depth of insight.

1. From fragmented tools to unified data security

Many organizations still rely on disjointed tools and siloed controls, creating blind spots that hinder the efficacy of security teams. According to the 2026 Data Security Index, decision-makers cite poor integration, lack of a unified view across environments, and disparate dashboards as their top challenges in maintaining proper visibility and governance. These gaps make it harder to connect insights and respond quickly to risks—especially as data volumes and data environment complexity surge. Security leaders simply aren’t getting the oversight they need.

Why it matters
Consolidating tools into integrated platforms improves visibility, governance, and proactive risk management.

To address these challenges, organizations are consolidating tools, investing in unified platforms like Microsoft Purview that bring operations together while improving holistic visibility and control. These integrated solutions frequently outperform fragmented toolsets, enabling better detection and response, streamlined management, and stronger governance.

As organizations adopt new AI-powered technologies, many are also leaning into emerging disciplines like Microsoft Purview Data Security Posture Management (DSPM) to keep pace with evolving risks. Effective DSPM programs help teams identify and prioritize data‑exposure risks, detect access to sensitive information, and enforce consistent controls while reducing complexity through unified visibility. When DSPM provides proactive, continuous oversight, it becomes a critical safeguard—especially as AI‑powered data flows grow more dynamic across core operations.

More than 80% of surveyed organizations are implementing or developing DSPM strategies

“We’re trying to use fewer vendors. If we need 15 tools, we’d rather not manage 15 vendor solutions. We’d prefer to get that down to five, with each vendor handling three tools.”

—Global information security director in the hospitality and travel industry

2. Managing AI-powered productivity securely

Generative AI is already influencing data security incident patterns: 32% of surveyed organizations’ data security incidents involve the use of generative AI tools. Understandably, surveyed security leaders have responded to this trend rapidly. Nearly half (47%) the security leaders surveyed in the 2026 Data Security Index are implementing generative AI-specific controls—an increase of 8% since the 2025 report. This helps enable innovation through the confident adoption of generative AI apps and agents while maintaining security.

A banner chart that says "32% of surveyed organizations' data security incidents involve use of AI tools."

Why it matters
Generative AI boosts productivity and innovation, but both unsanctioned and sanctioned AI tools must be managed. It’s essential to control tool use and monitor how data is accessed and shared with AI.

In the full report, we explore more deeply how AI-powered productivity is changing the risk profile of enterprises. We also explore several mechanisms, both technical and cultural, already helping maintain trust and reduce risk without sacrificing productivity gains or compliance.

3. Strengthening data security with generative AI

The 2026 Data Security Index indicates that 82% of organizations have developed plans to embed generative AI into their data security operations, up from 64% the previous year. From discovering sensitive data and detecting critical risks to investigating and triaging incidents, as well as refining policies, generative AI is being deployed for both proactive and reactive use cases at scale. The report explores how AI is changing the day-to-day operations across security teams, including the emergence of AI-assisted automation and agents.

Why it matters
Generative AI automates risk detection, scales protection, and accelerates response—amplifying human expertise while maintaining oversight.

“Our generative AI systems are constantly observing, learning, and making recommendations for modifications with far more data than would be possible with any kind of manual or quasi-manual process.”

—Director of IT in the energy industry

Turning recommendations into action

As organizations confront the challenges of data security in the age of AI, the 2026 Data Security Index report offers three clear imperatives: unifying data security, increasing generative AI oversight, and using AI solutions to improve data security effectiveness.

  1. Unified data security requires continuous oversight and coordinated enforcement across your data estate. Achieving this demands mechanisms that can discover, classify, and protect sensitive information at scale while extending safeguards to endpoints and workloads. Microsoft Purview DSPM operationalizes this principle through continuous discovery, classification, and protection of sensitive data across cloud, software as a service (SaaS), and on-premises assets.
  2. Responsible AI adoption depends on strict (but dynamic) controls and proactive data risk management. Organizations must enforce automated mechanisms that prevent unauthorized data exposure, monitor for anomalous usage, and guide employees toward sanctioned tools and responsible practices. Microsoft enforces these principles through governance policies supported by Microsoft Purview Data Loss Prevention and Microsoft Defender for Cloud Apps. These solutions detect, prevent, and respond to risky generative AI behaviors that increase the likelihood of data exposure, policy violations, or unsafe outputs, ensuring innovation aligns with security and compliance requirements.
  3. Modern security operations benefit from automation that accelerates detection and response alongside strong oversight. AI-powered agents can streamline threat investigation, recommend policies, and reduce manual workload while maintaining human oversight for accountability. We deliver this capability through Microsoft Security Copilot, embedded across Microsoft Sentinel, Microsoft Entra, Microsoft Intune, Microsoft Purview, and Microsoft Defender. These agents automate threat detection, incident investigation, and policy recommendations, enabling faster response and continuous improvement of security posture.

Stay informed, stay productive, stay protected

The insights we’ve covered here only scratch the surface of what the Microsoft Data Security Index reveals. The full report dives deeper into global trends, detailed metrics, and real-world perspectives from security leaders across industries and the globe. It provides specificity and context to help you shape your generative AI strategy with confidence.

If you want to explore the data behind these findings, see how priorities vary by region, and uncover actionable recommendations for secure AI adoption, read the full 2026 Microsoft Data Security Index to access comprehensive research, expert commentary, and practical guidance for building a security-first foundation for innovation.

Learn more

Learn more about the Microsoft Purview unified data security solutions.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.

The post New Microsoft Data Security Index report explores secure AI adoption to protect sensitive data appeared first on Microsoft Security Blog.

]]>
Microsoft named a Leader in IDC MarketScape for Unified AI Governance Platforms http://approjects.co.za/?big=en-us/security/blog/2026/01/14/microsoft-named-a-leader-in-idc-marketscape-for-unified-ai-governance-platforms/ Wed, 14 Jan 2026 17:00:00 +0000 Microsoft is honored to be named a Leader in the 2025–2026 IDC MarketScape for Unified AI Governance Platforms, highlighting our commitment to making AI innovation safe, responsible, and enterprise-ready.

The post Microsoft named a Leader in IDC MarketScape for Unified AI Governance Platforms appeared first on Microsoft Security Blog.

]]>
As organizations rapidly embrace generative and agentic AI, ensuring robust, unified governance has never been more critical. That’s why Microsoft is honored to be named a Leader in the 2025–2026 IDC MarketScape for Worldwide Unified AI Governance Platforms Vendor Assessment (#US53514825, December 2025). We believe this recognition highlights our commitment to making AI innovation safe, responsible, and enterprise-ready—so you can move fast without compromising trust or compliance.

A graphic showing Microsoft's position in the Leaders section of the IDC report.
Figure 1. IDC MarketScape vendor analysis model is designed to provide an overview of the competitive fitness of technology and suppliers in a given market. The research methodology utilizes a rigorous scoring methodology based on both qualitative and quantitative criteria that results in a single graphical illustration of each supplier’s position within a given market. The Capabilities score measures supplier product, go-to-market and business execution in the short term. The Strategy score measures alignment of supplier strategies with customer requirements in a three- to five-year timeframe. Supplier market share is represented by the size of the icons.

The urgency for a unified AI governance strategy is being driven by stricter regulatory demands, the sheer complexity of managing AI systems across multiple AI platforms and multicloud and hybrid environments, and leadership concerns for risk related to negative brand impact. Centralized, end-to-end governance platforms help organizations reduce compliance bottlenecks, lower operational risks, and turn governance into a strategic driver for responsible AI innovation. In today’s landscape, unified AI governance is not just a compliance obligation—it is critical infrastructure for trust, transparency, and sustainable business transformation.

Our own approach to AI is anchored to Microsoft’s Responsible AI standard, backed by a dedicated Office of Responsible AI. Drawing from our internal experience in building, securing, and governing AI systems, we translate these learnings directly into our AI management tools and security platform. As a result, customers benefit from features such as transparency notes, fairness analysis, explainability tools, safety guardrails, regulatory compliance assessments, agent identity, data security, vulnerability identification, and protection against cyberthreats like prompt-injection attacks. These tools enable them to develop, secure, and govern AI that aligns with ethical principles and is built to help support compliance with regulatory requirements. By integrating these capabilities, we empower organizations to make ethical decisions and safeguard their business processes throughout the entire AI lifecycle.

Microsoft’s AI Governance capabilities aim to provide integrated and centralized control for observability, management, and security across IT, developer, and security teams, ensuring cohesive governance within the tools those teams already use. Microsoft Foundry acts as our main control point for model development, evaluation, deployment, and monitoring, featuring a curated model catalog, machine learning operations, robust evaluation, and embedded content safety guardrails. Microsoft Agent 365, which was not yet available at the time of the IDC publication, provides a centralized control plane for IT, helping teams confidently deploy, manage, and secure their agentic AI published through Microsoft 365 Copilot, Microsoft Copilot Studio, and Microsoft Foundry.

Deeply embedded security systems are integral to Microsoft’s AI governance solution. Integrations with Microsoft Purview provide real-time data security, compliance, and governance tools, while Microsoft Entra provides agent identity and controls to manage agent sprawl and prevent unauthorized access to confidential resources. Microsoft Defender offers AI-specific posture management, threat detection, and runtime protection. Microsoft Purview Compliance Manager automates adherence to more than 100 regulatory frameworks. Granular audit logging and automated documentation bolster regulatory and forensic capabilities, enabling organizations in regulated industries to innovate with AI while maintaining oversight, secure collaboration, and consistent policy enforcement.

Guidance for security and governance leaders and CISOs

To empower organizations in advancing their AI transformation initiatives, it is crucial to focus on the following priorities for establishing a secure, well-governed, and scalable AI framework. The guidance below provides Microsoft’s recommendations for fulfilling these best practices:

CISO guidance: Adopt a unified, end-to-end governance platform
What it means: Establish a comprehensive, integrated governance system covering traditional machine learning, generative AI, and agentic AI. Ensure unified oversight from development through deployment and monitoring.
How Microsoft delivers: Microsoft enables observability and governance at every layer across IT, developer, and security teams, providing an integrated and cohesive governance platform that lets each team play its part from within the tools it already uses. Microsoft Foundry acts as the developer control plane, connecting model development, evaluation, security controls, and continuous monitoring. Microsoft Agent 365 is the control plane for IT, enabling discovery, security, deployment, and observability for agentic AI in the enterprise. Microsoft Purview, Entra, and Defender integrate to deliver consistent full-stack governance across data, identity, threat protection, and compliance.

CISO guidance: Industry-leading responsible AI infrastructure
What it means: Implement responsible AI practices as a foundational part of engineering and operations, with transparency and fairness built in.
How Microsoft delivers: Microsoft embeds its Responsible AI Standard into our engineering processes, supported by the Office of Responsible AI. Automatic generation of model cards and built-in fairness mechanisms set Microsoft apart as a strategic differentiator, pairing technical controls with mature governance processes. Microsoft’s Responsible AI Transparency Report provides visibility into how we develop and deploy AI models and systems responsibly, and offers a model for customers to emulate our best practices.

CISO guidance: Advanced security and real-time protection
What it means: Provide robust, real-time defense against emerging AI security threats, especially for regulated industries.
How Microsoft delivers: Microsoft’s platform features real-time jailbreak detection, encrypted agent-to-agent communication, tamper-evident audit logs for model and agent actions, and deep integration with Defender to provide AI-specific threat detection, security posture management, and automated incident response. These capabilities are especially critical for regulated sectors.

CISO guidance: Automated compliance at scale
What it means: Automate compliance processes, enable policy enforcement throughout the AI lifecycle, and support audit readiness across hybrid and multicloud environments.
How Microsoft delivers: Microsoft Purview streamlines adherence to regulatory requirements and provides comprehensive support for hybrid and multicloud deployments, giving customers repeatable and auditable governance processes.

We believe we are differentiated in the AI governance space by delivering a unified, end-to-end platform that embeds responsible AI principles and robust security at every layer—from agents and applications to underlying infrastructure. Through native integration of Microsoft Foundry, Microsoft Agent 365, Purview, Entra, and Defender, organizations benefit from centralized oversight and observability across the layers of the organization with consistent protection and operationalized compliance across the AI lifecycle. Our comprehensive approach removes disparate and disconnected tooling, enabling organizations to build trustworthy, transparent, and secure AI solutions that can start secure and stay secure. We believe this approach uniquely differentiates Microsoft as a leader in operationalizing responsible, secure, and auditable AI at scale.

Strengthen your security strategy with Microsoft AI governance solutions

Agentic and generative AI are reshaping business processes, creating a new frontier for security and governance. Organizations that act early and prioritize governance best practices—unified governance platforms, built-in responsible AI tooling, and integrated security—will be best positioned to innovate confidently and maintain trust.

Microsoft approaches AI governance with a commitment to embedding responsible practices and robust security at every layer of the AI ecosystem. Our AI governance and security solutions empower customers with built-in transparency, fairness, and compliance tools throughout engineering and operations. We believe this approach allows organizations to benefit from centralized oversight, enforce policies consistently across the entire AI lifecycle, and achieve audit readiness—even in the rapidly changing landscape of generative and agentic AI.

Explore more

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.

The post Microsoft named a Leader in IDC MarketScape for Unified AI Governance Platforms appeared first on Microsoft Security Blog.

]]>
Microsoft Purview innovations for your Fabric data: Unify data security and governance for the AI era http://approjects.co.za/?big=en-us/security/blog/2025/09/16/microsoft-purview-innovations-for-your-fabric-data-unify-data-security-and-governance-for-the-ai-era/ Tue, 16 Sep 2025 16:00:00 +0000 The Microsoft Fabric and Purview teams are thrilled to participate in the European Microsoft Fabric Community Conference.

The post Microsoft Purview innovations for your Fabric data: Unify data security and governance for the AI era appeared first on Microsoft Security Blog.

]]>
The Microsoft Fabric and Purview teams are thrilled to participate in the European Microsoft Fabric Community Conference September 15-18, 2025, in Vienna, Austria. This event is Microsoft’s largest tech conference in Europe, where data professionals gather to connect and share insights on data, security, governance, and AI transformation. With more than 130 breakout sessions, 10 workshops, and two keynotes, the conference is a hub for exploring the future of data and AI.

AI innovation is transforming every industry, business process, and individual experience. As organizations adopt AI, one truth remains constant:

Your AI is only as good as your data

If poor quality, incomplete, biased, or sensitive data is fed into AI models, the results will be equally flawed, leading to sensitive data leaks and inaccurate predictions—both of which create potentially harmful outcomes and erode trust. High-quality, governed, and secured data enables AI systems to deliver reliable insights and instill confidence in both data and AI usage. Consider a team building an AI-powered customer service app. Without trustworthy data, the AI could give incorrect answers or expose sensitive information. In fact, about 99% of organizations have already experienced sensitive data exposure through AI tools, underscoring the urgent need for robust safeguards.1 Compounding this challenge, many companies address data security and governance in silos, using separate point solutions for each and different tools across cloud platforms, which makes it harder to ensure data discovery, quality, and protection consistently.

As organizations prepare for an AI future, they require a comprehensive approach that solves both security and governance together. Microsoft Purview offers a modern, unified approach to help organizations secure and govern data across their heterogeneous data estate. Purview consolidates security, governance, and compliance into a single solution. Purview also bridges different tools across different data sources like Microsoft Azure, Microsoft 365, and Microsoft Fabric, streamlining oversight and reducing complexity across the estate.

At FabCon Vienna, we are announcing new Microsoft Purview innovations for Fabric to help you seamlessly secure and confidently activate your data for AI. These updates span data security and data governance, allowing Fabric users to both:

  1. Discover risks and protect data in Fabric
  2. Improve data discovery and quality across their Fabric estate

Discover risks and protect data

In today’s AI-powered world, data is both a powerful asset and a growing risk. Microsoft Purview helps organizations protect their data holistically by integrating Information Protection, Data Loss Prevention, Insider Risk Management, and Data Security Posture Management for AI. These tools work together to classify and secure sensitive data, prevent leaks, detect insider threats, and uncover AI-related risks. Paired with Microsoft Fabric, Purview builds upon existing data security such as OneLake Security while enabling innovation. Here are a few examples of how Purview secures your Fabric estate:

Microsoft Purview Information Protection policies for Fabric items and Data Loss Prevention for structured data in OneLake

Now generally available, Microsoft Purview Information Protection policies allow Fabric users to manually label Fabric items, with access controls automatically enforced according to pre-defined protection policies set by administrators. Data Loss Prevention policies on structured data in OneLake are also now generally available, preventing data oversharing in Fabric by triggering policy tips when sensitive data is detected in assets.
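Under the hood, DLP classification of structured data comes down to pattern matching plus validation. Purview's classifiers are proprietary, so the following is only an illustrative sketch of the general technique, not Purview's implementation: a credit-card detector that pairs a digit pattern with a Luhn checksum to cut false positives.

```python
import re

def luhn_valid(number: str) -> bool:
    """Luhn checksum used to validate candidate payment-card numbers."""
    total = 0
    # Double every second digit from the right; subtract 9 when the result exceeds 9.
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:
            d = d * 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# 13-16 digits, optionally separated by spaces or hyphens.
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def find_card_numbers(text: str) -> list[str]:
    """Return candidate card numbers in free text that pass the Luhn check."""
    hits = []
    for match in CARD_PATTERN.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if 13 <= len(digits) <= 16 and luhn_valid(digits):
            hits.append(digits)
    return hits
```

A real DLP engine layers many such classifiers with confidence levels and proximity evidence; the checksum step is what separates a plausible card number from an arbitrary run of digits.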

Microsoft Purview Insider Risk Management indicators for Power BI

Microsoft Purview Insider Risk Management is now generally available for Microsoft Fabric, extending its detection capabilities with built-in risk indicators for user activities in Power BI, such as viewing, downloading, exporting, and managing sensitivity labels for Power BI artifacts. These indicators can be applied directly to data theft and data leak policies, giving organizations stronger signals to spot suspicious behavior. By correlating signals across different activities, Insider Risk Management helps uncover potential insider threats such as intellectual property theft, unauthorized data sharing, or policy violations in Fabric.

Microsoft Purview Data Risk Assessments for Fabric

Within Purview’s Data Security Posture Management for AI, Data Risk Assessments will now support discovering overshared Fabric data (dashboards, reports, and more) in preview. Fabric customers will benefit from Data Risk Assessments by easily identifying what data is most at risk of leakage within Fabric. A default assessment will be created to identify overshared Fabric data in the top 100 accessed Fabric workspaces.

Microsoft Purview Data Security and Compliance controls for Copilot in Power BI

Microsoft Purview Data Security and Compliance controls for Copilot in Power BI are now generally available for Fabric users. Users can discover data risks, such as sensitive information in Copilot in Power BI’s prompts and responses, with actionable recommendations surfaced in Microsoft Purview Data Security Posture Management for AI reports. Users can also govern Copilot interactions using audit, eDiscovery, retention policies, and identifying non-compliant usage to support responsible AI usage.

Now that we’ve covered how Purview helps secure Fabric data, the next focus is to ensure that Fabric users can use that data.

Improve data discovery and quality across their Fabric estate

Once an organization’s data is well protected, the next challenge is making sure Fabric data consumers can find and trust the data for AI and analytics projects. This is where the Microsoft Purview Unified Catalog comes in, as a foundation for data discovery, quality, and curation across your Fabric environment. The Unified Catalog acts as a lever for data activation: it brings together powerful tools to improve data visibility and quality so that your analysts, data scientists, and AI models can easily locate the right data and use it with confidence. Estate-wide data discovery provides a holistic view of your data landscape, so data is not underutilized. Data quality tools empower teams to measure, monitor, and remediate issues such as incomplete rows or columns and redundant data, so business decisions rest on accurate, reliable data. Paired with Microsoft Fabric, Purview builds upon existing data governance capabilities in Fabric, such as the OneLake catalog, while enabling innovation. Here are a few examples of how:
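Purview's data quality rules are configured in the product rather than in code, but the completeness metric they report is easy to illustrate. A hypothetical sketch of per-column completeness over tabular rows (treating `None` and empty strings as missing):

```python
def completeness(rows: list[dict]) -> dict[str, float]:
    """Fraction of non-missing values per column across a tabular dataset."""
    if not rows:
        return {}
    columns = set()
    for row in rows:
        columns.update(row)
    scores = {}
    for col in sorted(columns):
        # Count values that are present and non-empty for this column.
        present = sum(1 for row in rows if row.get(col) not in (None, ""))
        scores[col] = present / len(rows)
    return scores
```

A column scoring well below 1.0 (say, an email field that is only one-third populated) is exactly the kind of signal a data steward would act on before that dataset feeds an AI workload.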

Sub item metadata in Fabric Lakehouse for comprehensive visibility of your Fabric estate

In preview, Fabric data consumers can now view metadata at the table, column, and file level in Purview, ensuring each artifact is recorded at its most granular detail for in-depth data discovery.

Defining custom attributes for business concepts using language your data consumers will understand

In the Unified Catalog, you can define and apply custom attributes to your data assets, which fosters better organization and utilization of your data. Now in preview, custom attributes provide data practitioners with the ability to apply specific attributes to business concepts such as glossary terms, critical data elements and data products. For a Fabric customer, this ensures that data is easier to understand and is more discoverable for usage of data workloads and AI use cases.

Published error records in Fabric for analysis and remediation of data quality issues

Now in preview, Fabric users can identify the root causes of data quality errors directly where they work in Fabric OneLake, giving Fabric data consumers a one-stop shop for remediating data for use in analytics and AI.

These governance enhancements empower teams to use data with confidence. A protected dataset isn’t very useful if users don’t know it exists or don’t trust its accuracy. The Unified Catalog ensures that data assets are more discoverable and trustworthy for Fabric users.

Looking forward

As organizations embrace the transformative power of AI, the need for robust data security and governance has never been greater. Microsoft Purview and Microsoft Fabric provide a unified foundation that empowers organizations to innovate confidently, knowing their data is protected, governed, and ready for responsible AI activation. We are committed to helping you stay ahead of evolving challenges and opportunities and invite you to explore these new capabilities. Join us on the journey toward a more secure, governed, and innovative data future.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity. 


¹ Businesstechweekly.com, 99% of Organizations Expose Sensitive Data: The Security Risks of Uncontrolled AI Tools, May 28, 2025.

The post Microsoft Purview innovations for your Fabric data: Unify data security and governance for the AI era appeared first on Microsoft Security Blog.

]]>
Dow’s 125-year legacy: Innovating with AI to secure a long future http://approjects.co.za/?big=en-us/security/blog/2025/08/12/dows-125-year-legacy-innovating-with-ai-to-secure-a-long-future/ Tue, 12 Aug 2025 16:00:00 +0000 Microsoft recently spoke with Mario Ferket, Chief Information Security Officer for Dow, about the company’s approach to AI in security.

The post Dow’s 125-year legacy: Innovating with AI to secure a long future appeared first on Microsoft Security Blog.

]]>
Founded more than 125 years ago, Dow has demonstrated a commitment to leveraging science to make the world a better place. Today, Dow’s ambition to be the most innovative, inclusive, and sustainable materials science company is supported by a global security team dedicated to keeping employees, customers, and vast volumes of data safe and secure.

Dow’s security team, led by Chief Information Security Officer Mario Ferket, proactively covers everything from governance, risk, compliance, identity and access management, and information protection to data privacy while their team continues to mature and grow. With this comes a partnership with Microsoft Security using tools including Microsoft Security Copilot.

Microsoft recently spoke with Ferket on Dow’s approach to AI in security, establishing a responsible AI team, and how Security Copilot is acting as a mentor within their apprentice program.

MICROSOFT: How has your security team evolved in the past few years to incorporate AI into your business?

FERKET: AI at Dow is being viewed as a significant business enabler to better serve our customers with innovative and sustainable products. To use AI in a responsible way, we partnered with our Enterprise Data and Analytics team, Legal, and other departments to establish a responsible AI team.

This team was tasked with defining a set of principles as well as creating an acceptable use policy for generative AI as we rolled out Microsoft Copilot in the company. Beyond that, the new cross-functional team has been looking at the new risks associated with the use of AI and how to protect ourselves, our data, and our customers. The team is also exploring how AI can be leveraged to enhance our security operations and use “AI to fight AI” in instances in which AI is potentially used with malicious intentions.

Pictured: Mario Ferket, Chief Information Security Officer, Dow

MICROSOFT: How is AI being integrated into Dow’s security efforts, and what specific capabilities are you leveraging?

FERKET: Our team is leveraging several AI- and machine learning-enabled capabilities to better detect and remove phishing emails, potential business email compromise (BEC) instances, and other malicious content sent to Dow through email.

For well over a year, we have been working with Microsoft in a design partnership to leverage Security Copilot as a key tool in the Dow Cyber Security Operations Center (CSOC). Given the sophistication and speed of cyberattacks, our original need was to eliminate repetitive manual tasks through automation and move to more automated interventions. This allows the team to spend more time on proactive activities. We are also using Security Copilot for threat hunting augmentation, automated incident summarization, and ticket enrichment by pulling indicators from intelligence services to provide context to the tickets being investigated, and generating queries to support threat hunting activities. We’ve found that this helps eliminate labor-intensive activities.

MICROSOFT: What impact has AI had on your team, and do you have any lessons learned from integrating AI into your security operations?

FERKET: Once the initial learning curve of Security Copilot was passed, the Dow CSOC quickly identified quick wins for leveraging the tool. It is now common in any investigation to hear the phrase “Have you asked Copilot?” for a wide variety of situations.

In the past, our Dow CSOC relied on extensive institutional knowledge within the team to know what “good” and “bad” looked like. Being able to query Security Copilot in natural language helps the team to quickly identify relevant information and act on it. The ability to leverage Security Copilot helps analysts to focus more on investigations and less on sifting through data. Before this level of automation, a member of my security team would have to manually source data from several sources to draw correlations and conclusions during an investigation. Now, when an alert trips, Security Copilot enriches alerts with contextualized data to support investigations. By using Security Copilot for incident summarization and enrichment, natural language search, and automation, the CSOC can decrease the time between when an alert fires and when action is taken.

Both Microsoft 365 Copilot and Security Copilot have become integral to the day-to-day operations of the CSOC, with analysts querying the tool several times a day for many reasons, ranging from data interpretation to ticket enrichment. Security Copilot enriches tickets with relevant data, cutting down the amount of time spent collating data. It has helped the Dow CSOC to automate the menial tasks of security investigations, allowing our more senior analysts to focus on proactive defensive measures. Our team has been surprised by how quickly we adopted the new capabilities and integrated them into our standard processes.

Within Dow, we also have an apprentice program with individuals from diverse backgrounds who are very often non-IT trained. Traditionally, it would take upwards of a year of on-the-job training and job-shadowing of senior analysts for one of these apprentices to become “full” members of the team. Now, these apprentices can use Security Copilot as a “virtual mentor” for topics such as query building or learning the cyberthreat landscape, drastically decreasing the ramp time required for the apprentice to be productive and ensuring that senior analysts are able to focus on proactive defense.

MICROSOFT: What are the future directions and innovations you are considering in the field of AI and security, and how do you plan to implement them?

FERKET: Looking ahead, we’re exploring the use of advanced AI-powered capabilities to enhance detection of anomalies and patterns across large-scale telemetry. We’re also evaluating ways to streamline rule management through intelligent automation, aiming to reduce the manual overhead for our analysts. Another area of interest is dynamic prioritization of alerts, where contextual signals and threat intelligence can help refine response urgency. As always, we remain vigilant about the evolving use of AI by malicious actors and continue to assess its broader implications on the threat landscape.

MICROSOFT: What advice would you give to other security teams starting their AI journey?

FERKET: Be agile, but focused. AI is undoubtedly changing the cyber defense landscape, with many emergent tools being released regularly. It is easy to get lost in the “art of the possible” when it comes to AI tooling. Organizations starting their AI journey should be mindful of their core business objectives, the limitations of current AI capabilities, and be ready to pivot as things change rapidly. For the Dow CSOC, AI is seen as a great augmentation to help analysts be more effective and spend time on what really matters.

Microsoft Ignite 2025

Join us at Microsoft Ignite 2025 to explore our AI-first, end-to-end security platform, attend hands-on labs, get certified, connect with peers, and chat with industry experts. Register now.


To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.

The post Dow’s 125-year legacy: Innovating with AI to secure a long future appeared first on Microsoft Security Blog.

]]>
Protecting customers from Octo Tempest attacks across multiple industries http://approjects.co.za/?big=en-us/security/blog/2025/07/16/protecting-customers-from-octo-tempest-attacks-across-multiple-industries/ Wed, 16 Jul 2025 16:00:00 +0000 To help protect and inform customers, Microsoft highlights protection coverage across the Microsoft Defender security ecosystem to protect against threat actors like Octo Tempest.

The post Protecting customers from Octo Tempest attacks across multiple industries appeared first on Microsoft Security Blog.

]]>
In recent weeks, Microsoft has observed Octo Tempest, also known as Scattered Spider, impacting the airline sector, following previous activity impacting retail, food services, hospitality, and insurance organizations between April and July 2025. This aligns with Octo Tempest’s typical pattern of concentrating on one industry for several weeks or months before moving on to new targets. Microsoft Security products continue to update protection coverage as these shifts occur.

To help protect and inform customers, this blog highlights the protection coverage across the Microsoft Defender and Microsoft Sentinel security ecosystem and provides security posture hardening recommendations to protect against threat actors like Octo Tempest.

Overview of Octo Tempest 

Octo Tempest, also known in the industry as Scattered Spider, Muddled Libra, UNC3944, or 0ktapus, is a financially motivated cybercriminal group that has been observed impacting organizations using varying methods in their end-to-end attacks. Their approach includes: 

  • Gaining initial access using social engineering attacks and impersonating a user and contacting service desk support through phone calls, emails, and messages.
  • Short Message Service (SMS)-based phishing using adversary-in-the-middle (AiTM) domains that mimic legitimate organizations.
  • Using tools such as ngrok, Chisel, and AADInternals.
  • Impacting hybrid identity infrastructures and exfiltrating data to support extortion or ransomware operations.  
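
The AiTM phishing step above hinges on lookalike domains that mimic a legitimate organization. One common defensive heuristic (illustrative only, not a Microsoft detection) is to score newly observed domains by string similarity to your own; `contoso.com` and the threshold below are placeholders.

```python
from difflib import SequenceMatcher

def lookalike_score(candidate: str, legit: str) -> float:
    """Similarity ratio between a candidate domain and a legitimate one (1.0 = identical)."""
    return SequenceMatcher(None, candidate.lower(), legit.lower()).ratio()

def flag_lookalikes(domains: list[str], legit: str, threshold: float = 0.75) -> list[str]:
    """Flag domains that closely resemble, but are not, the legitimate domain."""
    return [d for d in domains
            if d.lower() != legit.lower()
            and lookalike_score(d, legit) >= threshold]
```

In practice, teams feed candidates from certificate-transparency logs or DNS telemetry and tune the threshold; a domain like `contoso-sso.com` scores high against `contoso.com` while unrelated domains fall well below.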

Recent activity shows Octo Tempest has deployed DragonForce ransomware with a particular focus on VMware ESX hypervisor environments. In contrast to previous patterns where Octo Tempest used cloud identity privileges for on-premises access, recent activities have involved impacting both on-premises accounts and infrastructure at the initial stage of an intrusion before transitioning to cloud access.

Octo Tempest detection coverage 

Microsoft Defender provides a wide range of detections for Octo Tempest-related activities and more. These detections span all areas of the security portfolio, including endpoints, identities, software as a service (SaaS) apps, email and collaboration tools, and cloud workloads, to provide comprehensive protection coverage. Shown below is a list of known Octo Tempest tactics, techniques, and procedures (TTPs) observed in recent attack chains, mapped to detection coverage.

Tactic and technique, with Microsoft protection coverage (non-exhaustive):

Initial Access: Initiating password reset on target’s credentials
  • Unusual user password reset in your virtual machine (MDC)

Discovery: Continuing environmental reconnaissance
  • Suspicious credential dump from NTDS.dit (MDE)
  • Account enumeration reconnaissance (MDI)
  • Network-mapping reconnaissance (DNS) (MDI)
  • User and IP address reconnaissance (SMB) (MDI)
  • User and Group membership reconnaissance (SAMR) (MDI)
  • Active Directory attributes reconnaissance (LDAP) (MDI)

Credential Access, Lateral Movement: Identifying Tier-0 assets
  • Mimikatz credential theft tool (MDE)
  • ADExplorer collecting Active Directory information (MDE)
  • Security principal reconnaissance (LDAP) (MDI)
  • Suspicious Azure role assignment detected (MDC)
  • Suspicious elevate access operation (MDC)
  • Suspicious domain added to Microsoft Entra ID (MDA)
  • Suspicious domain trust modification following risky sign-in (MDA)

Credential Access, Lateral Movement: Collecting additional credentials
  • Suspected DCSync attack (replication of directory services) (MDI)
  • Suspected AD FS DKM key read (MDI)

Credential Access, Lateral Movement: Accessing enterprise environments with VPN and deploying VMs with tools to maintain access in compromised environments
  • ‘Ngrok’ hacktool was prevented (MDE)
  • ‘Chisel’ hacktool was prevented (MDE)
  • Possibly malicious use of proxy or tunneling tool (MDE)
  • Possible Octo Tempest-related device registered (MDA)

Defense Evasion, Persistence: Leveraging EDR and management tooling
  • Tampering activity typical to ransomware attacks (MDE)

Persistence, Execution: Installing a trusted backdoor
  • ADFS persistent backdoor (MDE)

Actions on Objectives: Staging and exfiltrating stolen data
  • Possible exfiltration of archived data (MDE)
  • Data exfiltration over SMB (MDI)

Actions on Objectives: Deploying ransomware
  • ‘DragonForce’ ransomware was prevented (MDE)
  • Possible hands-on-keyboard pre-ransom activity (MDE)
Note: The list is not exhaustive. A full list of available detections can be found in the Microsoft Defender portal. 

Disrupting Octo Tempest attacks  

Disrupt in-progress attacks with automatic attack disruption:
Attack disruption is Microsoft Defender’s unique, built-in self-defense capability that consumes multi-domain signals, the latest threat intelligence, and AI-powered machine learning models to automatically predict and disrupt an attacker’s next move by containing the compromised asset (user or device). This technology uses multiple potential indicators and behaviors, including all the detections listed above, possible Microsoft Entra ID sign-in attempts, and possible Octo Tempest-related sign-in activities, and correlates them across the Microsoft Defender workloads into a high-fidelity incident.

Based on previous learnings from popular Octo Tempest techniques, attack disruption will automatically disable the user account used by Octo Tempest and revoke all of the compromised user’s active sessions.
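Attack disruption performs this containment automatically. For SOC teams scripting the equivalent manual response, the two actions correspond to Microsoft Graph calls: disabling the account and revoking its refresh tokens. The sketch below only builds the requests; actually sending them requires an authenticated client with sufficient privileges, and `user-guid` in the usage notes is a placeholder.

```python
GRAPH = "https://graph.microsoft.com/v1.0"

def disable_account_request(user_id: str) -> tuple[str, str, dict]:
    """Request that disables a compromised user account (PATCH users/{id})."""
    return ("PATCH", f"{GRAPH}/users/{user_id}", {"accountEnabled": False})

def revoke_sessions_request(user_id: str) -> tuple[str, str, dict]:
    """Request that invalidates the user's refresh tokens, ending active sessions."""
    return ("POST", f"{GRAPH}/users/{user_id}/revokeSignInSessions", {})
```

Revoking sessions matters because disabling the account alone does not immediately invalidate tokens already issued to the attacker.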

While attack disruption can contain the attack by cutting off the attacker, it is critical for security operations center (SOC) teams to conduct incident response activities and post-incident analysis to help ensure the threat is fully contained and remediated.  
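For SOC teams scripting the same containment manually during incident response, the two actions described above (disable the account, revoke its sessions) correspond to documented Microsoft Graph calls: `PATCH /users/{id}` with `accountEnabled: false` and `POST /users/{id}/revokeSignInSessions`. The sketch below is illustrative, not Microsoft’s internal implementation; it assumes you already hold an app token with sufficient permissions (for example, `User.ReadWrite.All`), and it uses only the Python standard library:

```python
import json
import urllib.request

GRAPH = "https://graph.microsoft.com/v1.0"

def containment_plan(user_id: str):
    """Build the two Graph requests that mirror attack disruption's
    containment: disable the account, then revoke active sessions.
    Returns a list of (method, url, json_body) tuples."""
    return [
        ("PATCH", f"{GRAPH}/users/{user_id}", {"accountEnabled": False}),
        ("POST", f"{GRAPH}/users/{user_id}/revokeSignInSessions", None),
    ]

def contain_user(user_id: str, token: str) -> None:
    """Execute the plan against Microsoft Graph (network call; error
    handling kept minimal for brevity in this sketch)."""
    for method, url, body in containment_plan(user_id):
        req = urllib.request.Request(
            url,
            data=None if body is None else json.dumps(body).encode(),
            method=method,
            headers={
                "Authorization": f"Bearer {token}",
                "Content-Type": "application/json",
            },
        )
        urllib.request.urlopen(req)
```

Revoking sessions matters as much as disabling the account: Octo Tempest frequently holds stolen refresh tokens that would otherwise survive a password reset.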

Investigate and hunt for Octo Tempest related activity:
Octo Tempest is infamous for aggressive social engineering tactics, often targeting individuals with specific permissions to gain legitimate access and move laterally through networks. To help organizations identify these activities, customers can use Microsoft Defender’s advanced hunting capability to proactively investigate and respond to threats across their environment. Analysts can query across both first- and third-party data sources powered by Microsoft Defender XDR and Microsoft Sentinel. In addition to these tables, analysts can also use exposure insights from Microsoft Security Exposure Management.  

Using advanced hunting and the Exposure Graph, defenders can proactively assess and hunt for the threat actor’s related activity and identify which users are most likely to be targeted and what will be the effect of a compromise, strengthening defenses before an attack occurs.  
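Advanced hunting queries can also be run programmatically through the Microsoft Graph `runHuntingQuery` endpoint. The KQL below is an illustrative starting point only (it simply surfaces recent alerts whose titles name the actor and joins in the impacted entities); tune it to your own environment and to the detection names listed earlier:

```python
import json
import urllib.request

HUNT_URL = "https://graph.microsoft.com/v1.0/security/runHuntingQuery"

# Illustrative KQL: recent alerts mentioning the actor, joined to the
# accounts and devices recorded as evidence for each alert.
QUERY = """
AlertInfo
| where Timestamp > ago(7d)
| where Title has "Octo Tempest"
| join kind=inner AlertEvidence on AlertId
| project Timestamp, Title, EntityType, AccountName, DeviceName
"""

def build_hunt_request(token: str) -> urllib.request.Request:
    """Package the KQL for Graph's advanced hunting API; send with
    urllib.request.urlopen() using a token that has ThreatHunting.Read.All."""
    return urllib.request.Request(
        HUNT_URL,
        data=json.dumps({"Query": QUERY}).encode(),
        method="POST",
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )
```

The same query runs unchanged in the advanced hunting page of the Microsoft Defender portal, which is the easier place to iterate before automating.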

Proactive defense against Octo Tempest 

Microsoft Security Exposure Management, available in the Microsoft Defender portal, equips security teams with capabilities such as critical asset protection, threat actor initiatives, and attack path analysis that enable security teams to proactively reduce exposure and mitigate the impact of Octo Tempest’s hybrid attack tactics.

Ensure critical assets stay protected 

Customers should ensure critical assets are classified as critical in the Microsoft Defender portal to generate relevant attack paths and recommendations in initiatives. Microsoft Defender automatically identifies critical devices in your environment, but teams should also create custom rules and expand critical asset identifiers to enhance protection.  

Take action to minimize impact with initiatives 

Exposure Management’s initiatives feature provides goal-driven programs that unify key insights to help teams harden defenses and act fast on real threats. To address the most pressing risks related to Octo Tempest, we recommend organizations begin with the initiatives below: 

  • Octo Tempest Threat Initiative: Octo Tempest is known for tactics like extracting credentials from Local Security Authority Subsystem Service (LSASS) using tools like Mimikatz and signing in from attacker-controlled IPs—both of which can be mitigated through controls like attack surface reduction (ASR) rules and sign-in policies. This initiative brings these mitigations together into a focused program, mapping real-world attacker behaviors to actionable controls that help reduce exposure and disrupt attack paths before they escalate.
  • Ransomware Initiative: A broader initiative focused on reducing exposure to extortion-driven attacks through hardening identity, endpoint, and infrastructure layers. This will provide recommendations tailored for your organization.  

Investigate on-premises and hybrid attack paths

Security teams can use attack path analysis to trace cross-domain threats like those used by Octo Tempest, which has exploited the critical Entra Connect server to pivot into cloud workloads, escalate privileges, and expand its reach. Teams can use the ‘Chokepoint’ view in the attack path dashboard to highlight entities that appear in multiple paths, making it easy to filter for helpdesk-linked accounts, a known Octo Tempest target, and prioritize their remediation.  

Given Octo Tempest’s hybrid attack strategy, a representative attack path may look like this: 

Recommendations 

In today’s threat landscape, proactive security is essential. By following security best practices, you reduce the attack surface and limit the potential impact of adversaries like Octo Tempest. Microsoft recommends implementing the following to help strengthen your overall posture and stay ahead of threats: 

Identity security recommendations 

Endpoint security recommendations 

  • Enable Microsoft Defender Antivirus cloud-delivered protection for Linux.
  • Turn on Microsoft Defender Antivirus real-time protection for Linux.
  • Enable Microsoft Defender for Endpoint EDR in block mode to block post-breach malicious behavior on the device through behavior blocking and containment capabilities.
  • Turn on tamper protection to prevent your Microsoft Defender for Endpoint security settings from being modified.
  • Block credential stealing from the Windows local security authority subsystem: Attack surface reduction (ASR) rules are the most effective method for blocking the most common attack techniques used in cyberattacks and malicious software.
  • Turn on Microsoft Defender Credential Guard to isolate secrets so that only privileged system software can access them.
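As a hedged sketch of the LSASS-related bullet above: the documented way to enable an individual ASR rule on a single endpoint is `Add-MpPreference` in an elevated PowerShell session (at scale, the same setting is deployed through Intune or Group Policy). The wrapper below only assembles and runs that command; the GUID is the published rule ID for "Block credential stealing from the Windows local security authority subsystem":

```python
import subprocess

# Published ASR rule GUID for blocking credential stealing from lsass.exe.
LSASS_ASR_RULE = "9e6c4e1f-7d60-472f-ba1a-a39ef669e4b2"

def asr_enable_command(rule_id: str, action: str = "Enabled") -> list:
    """Build the elevated PowerShell invocation that adds an ASR rule
    in the given mode (Enabled blocks; AuditMode only logs)."""
    return [
        "powershell.exe", "-NoProfile", "-Command",
        f"Add-MpPreference -AttackSurfaceReductionRules_Ids {rule_id} "
        f"-AttackSurfaceReductionRules_Actions {action}",
    ]

def enable_lsass_rule() -> None:
    """Run on the endpoint itself, from an elevated context (Windows only)."""
    subprocess.run(asr_enable_command(LSASS_ASR_RULE), check=True)
```

Starting a rule in `AuditMode` first lets you confirm it will not break line-of-business software before switching it to `Enabled`.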

Cloud security recommendations 

  • Key Vaults should have purge protection enabled to prevent immediate, irreversible deletion of vaults and secrets.
  • To reduce risks of overly permissive inbound rules on virtual machines’ management ports, enable just-in-time (JIT) network access control. 
  • Microsoft Defender for Cloud recommends encrypting data with customer-managed keys (CMK) to support strict compliance or regulatory requirements. To reduce risk and increase control, enable CMK to manage your own encryption keys through Microsoft Azure Key Vault.
  • Enable logs in Azure Key Vault and retain them for up to a year. This enables you to recreate activity trails for investigation purposes when a security incident occurs or your network is compromised.
  • Microsoft Azure Backup should be enabled for virtual machines to protect the data on your Microsoft Azure virtual machines, and to create recovery points that are stored in geo-redundant recovery vaults.
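The Key Vault logging recommendation above can be scripted against the Azure Monitor diagnostic settings REST API. This is a sketch under stated assumptions: the `api-version`, setting name, and resource IDs are placeholders, and in practice most teams apply this through the Azure CLI or Azure Policy instead; `retentionPolicy` applies to the storage-account destination used here:

```python
import json
import urllib.request

ARM = "https://management.azure.com"
API_VERSION = "2021-05-01-preview"  # diagnostic settings api-version (assumption)

def diagnostic_settings_request(vault_resource_id: str,
                                storage_account_id: str,
                                token: str) -> urllib.request.Request:
    """Enable AuditEvent logging on a Key Vault, archived to a storage
    account and retained for one year. Both resource IDs are placeholder
    ARM IDs supplied by the caller."""
    url = (f"{ARM}{vault_resource_id}/providers/Microsoft.Insights/"
           f"diagnosticSettings/keyvault-audit?api-version={API_VERSION}")
    body = {
        "properties": {
            "storageAccountId": storage_account_id,
            "logs": [{
                "category": "AuditEvent",
                "enabled": True,
                "retentionPolicy": {"enabled": True, "days": 365},
            }],
        }
    }
    return urllib.request.Request(
        url,
        data=json.dumps(body).encode(),
        method="PUT",
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )
```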


​​To learn more about Microsoft Security solutions, visit our website. Bookmark the Microsoft Security blog to keep up with our expert coverage on security matters.

Also, follow us on Microsoft Security LinkedIn and @MSFTSecurity on X for the latest news and updates on cybersecurity. 
