Digital Security Best Practices | Microsoft Security Blog: Expert coverage of cybersecurity topics

Transforming public sector security operations in the AI era http://approjects.co.za/?big=en-us/security/blog/2025/04/01/transforming-public-sector-security-operations-in-the-ai-era/ Tue, 01 Apr 2025 16:00:00 +0000 Read how Microsoft’s unified security operations platform can use generative AI to transform cybersecurity for the public sector.

The post Transforming public sector security operations in the AI era appeared first on Microsoft Security Blog.

The cyberthreat landscape is evolving at an unprecedented pace, becoming increasingly dangerous and complex. Nation-state threat actors and cybercriminals are employing advanced tactics and generative AI to execute highly sophisticated attacks. This situation is further compounded by outdated technology and systems, a shortage of cybersecurity talent, and antiquated processes that cannot keep up with the scale, complexity, and ever-evolving nature of these cyberattacks. With 62% of all cyberattacks targeting public sector organizations, it is crucial for these organizations to leverage state-of-the-art technology, powered by generative AI, to transform their cyber defense and stay ahead of evolving threats.1


Microsoft’s unified security operations for public sector

Embracing modern security technology, processes, and continuous skill development is vital for protecting public sector organizations. By leveraging innovations powered by generative AI, unparalleled threat intelligence, and best practices, public sectors can transform their security operations to effectively defend against emerging cyberthreats.

AI-powered security operations: Microsoft delivers innovations to effectively protect against today’s complex threat landscape. The AI-powered unified security operations platform streamlines security operations by integrating security information and event management (SIEM); security orchestration, automation, and response (SOAR); extended detection and response (XDR); posture and exposure management; cloud security; threat intelligence; and AI into a single, cohesive experience. This eliminates silos and provides end-to-end security operations (SecOps), boosting analyst efficiency, reducing context switching, and delivering quicker time to value with less integration work.

Microsoft is committed to helping public sector customers accelerate threat detection and response through improved security posture across organizations, with richer insights, multi-tenant management, early warnings, and increased efficiency through automation and generative AI. Through automatic attack disruption, Microsoft Defender XDR uses robust threat intelligence, advanced AI, and machine learning to detect and contain sophisticated cyberattacks in real time, significantly reducing their impact. This high-fidelity detection and protection capability disrupts more than 40,000 incidents each month, such as identity threats and human-operated cyberattacks, while maintaining a false positive rate below 1%.

“Speed is an important factor against adversaries, and gaining situational awareness across a complex landscape of threats is therefore key.”

—Customer in the healthcare industry

People and process modernization: Public-private partnerships play a vital role in fostering the exchange of best practices and developing standardized processes that drive efficiency in incident response and threat intelligence sharing. For example, adapting the threat triage process to leverage generative AI agents can help teams scale significantly, with agents autonomously analyzing and triaging vast volumes of alerts in real time, prioritizing critical cyberthreats, and recommending specific remediation steps based on historical patterns. These collaborations also empower organizations to build teams equipped with cutting-edge skills and a comprehensive understanding of generative AI capabilities, helping them stay ahead of emerging cyberthreats.

Collective cyber defense and threat intelligence: Using Microsoft’s global threat intelligence insights, public sector organizations can collaborate with each other and across other sectors to share deeper cyberthreat insights efficiently. This partnership enables public sector organizations to exchange threat intelligence in a standardized manner within a region or country.

“Collective defense collaborations are driven by mutual interests with industry peers and cybersecurity alliances on improving security postures and responding more effectively to emerging threats.”

—Customer in the transport industry

The power of generative AI in cyber operations

Generative AI brings several transformative benefits to cybersecurity, making it a cornerstone for public sector security operations center (SOC) modernization.

Enhanced threat detection and response: Generative AI has the potential to sift through data from firewalls, endpoints, and cloud workloads, surfacing actionable cyberthreats that might go unnoticed in manual reviews. Unlike traditional rule-based detection methods, generative AI can identify attack patterns, adapt to emerging cyberthreats, and prioritize incidents based on risk severity, helping security teams focus on the most critical issues. Generative AI can go beyond simply surfacing cyberthreats; it can contextualize attack signals, predict potential breaches, and recommend guided responses for remediation strategies, reducing the burden on security analysts. Microsoft Security Copilot already covers a range of use cases and is expanding rapidly to seize the full potential of generative AI. By providing guided incident investigation and response, Security Copilot helps SOC teams detect and respond to cyberthreats more effectively. It can help teams learn about malicious actors and campaigns, provide rapid summaries, and even contact the user to check for suspicious behavior. Adoption is associated with a 30% reduction in security incident mean time to resolution (MTTR).2

Reduced operational overhead: By automating routine tasks, generative AI can free analysts from repetitive processes like alert triage or patch validation, enabling them to focus on advanced threat hunting. Security teams can already leverage Security Copilot to translate complex scripts into natural language, highlighting and explaining key parts to enhance team skills and reduce the time spent on advanced investigations by as much as 85%, helping security teams operate at scale.3

“Increased support from AI is critical given the significant capacity challenge in the public sector: a shortage of talent, an influx of threats, and an ever-increasing volume of data, assets, and organizations.”

—National SOC customer

Building a resilient digital future together

As nation-state threat actors and cybercriminals increasingly employ generative AI in their cyberattacks, public sector organizations can no longer rely on fragmented, manual defenses. The path forward lies in public-private collaboration, centered on co-designing and innovating solutions tailored to the public sector’s unique needs.

By adopting Microsoft Security solutions, public sector organizations can leverage combined resources, expertise, and cutting-edge technology to fortify critical infrastructure, safeguard citizen data, and strengthen public trust.

Now is the time to act: Modernize your cyber defense in the AI era to collectively forge a more secure and resilient digital future for government and public sector operations.

Learn more

Learn more about the AI-powered unified security operations platform.

Learn more about Microsoft Sentinel.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.


1Microsoft Digital Defense Report 2024

2Generative AI and Security Operations Center Productivity: Evidence from Live Operations, Microsoft study. James Bono, Alec Xu, Justin Grana. November 24, 2024.

3The Total Economic Impact™ of Microsoft Sentinel, a commissioned study conducted by Forrester Consulting, March 2024. Results are based on a composite organization representative of interviewed customers.

US Department of Labor’s journey to Zero Trust security with Microsoft Entra ID http://approjects.co.za/?big=en-us/security/blog/2025/03/27/us-department-of-labors-journey-to-zero-trust-security-with-microsoft-entra-id/ Thu, 27 Mar 2025 16:00:00 +0000 Discover how the US Department of Labor enhanced security and modernized authentication with Microsoft Entra ID and phishing-resistant authentication.

The post US Department of Labor’s journey to Zero Trust security with Microsoft Entra ID appeared first on Microsoft Security Blog.

For several years, Microsoft has been helping United States federal and state government groups, including military departments and civilian agencies, transition to a Zero Trust security model. Advanced features in Microsoft Entra ID have helped these organizations meet requirements to employ centralized identity management systems, to use phishing-resistant multifactor authentication, and to consider device-level signals for authorizing access to resources.

The US Department of Labor (DOL) has been on a journey to consolidate their identity systems and modernize authentication to applications. In this blog post, I’ll describe the benefits they’re gaining from supplementing personal identity verification (PIV) cards with device-bound passkeys implemented through the Microsoft Authenticator app and from adding risk signals to Microsoft Entra Conditional Access policies.

To review how Microsoft Entra ID can help your department or agency meet federal cybersecurity requirements, while reducing complexity and improving the user experience, visit Microsoft Entra ID: Enhancing identity security for US agencies.

Adopting Microsoft Entra ID as a centralized identity system

Like many organizations, DOL first used Entra ID (then called Azure Active Directory) when they adopted Microsoft 365. At that time, they were maintaining multiple identity technologies, including on-premises Active Directory, Active Directory Federation Services, and Ping Federate. This fragmented strategy required users to authenticate to different applications using different identity systems.

With the help of their Identity, Credential, and Access Management (ICAM) group, DOL worked to consolidate all their identity systems into Entra ID. They chose Entra ID because it supports the necessary protocols (such as SAML and OIDC) to deliver a single sign-on (SSO) experience for most of their applications. This effort, which took about a year, included reaching out to application owners and encouraging them to move their applications off Kerberos, ideally by adopting the Microsoft Authentication Library (MSAL), so their applications could easily integrate with Entra ID.

Integrating applications with Entra ID makes it possible to strengthen security by applying Conditional Access policies to them. At first, DOL applied simple Conditional Access policies that only allowed access to applications from hybrid-joined Government Furnished Equipment (GFE) devices. The COVID-19 pandemic accelerated their adoption of additional features, such as enforcing device compliance through Microsoft Intune and reporting device risk to other security services through integration with Microsoft Defender for Endpoint. Policies could then make access decisions based on device risk, such as only granting access to applications from devices with “low risk” or “no risk.”
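A policy like the hybrid-joined GFE requirement can be expressed through the Microsoft Graph conditionalAccessPolicy resource. The sketch below builds such a payload in Python; the shape follows the documented schema, but the display name and all-users/all-apps scoping are illustrative, and actually creating the policy (for example, by POSTing to /identity/conditionalAccess/policies) is omitted.

```python
# Sketch: build a Conditional Access policy payload, similar in spirit to a
# policy that only allows access from hybrid-joined or compliant devices.
# The payload shape follows the Microsoft Graph conditionalAccessPolicy
# resource; submitting it to the API is left out of this sketch.

def build_gfe_only_policy(display_name="Require hybrid-joined or compliant device"):
    """Return a policy dict that blocks access unless the device is
    Microsoft Entra hybrid joined or marked compliant by Intune."""
    return {
        "displayName": display_name,
        # Start in report-only mode so the policy's impact can be observed first.
        "state": "enabledForReportingButNotEnforced",
        "conditions": {
            "users": {"includeUsers": ["All"]},
            "applications": {"includeApplications": ["All"]},
        },
        "grantControls": {
            # OR: satisfying either control is sufficient to grant access.
            "operator": "OR",
            "builtInControls": ["domainJoinedDevice", "compliantDevice"],
        },
    }

policy = build_gfe_only_policy()
print(policy["state"])  # enabledForReportingButNotEnforced
```

Starting in report-only state is a common rollout choice; switching `"state"` to `"enabled"` enforces the policy.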

For an introduction to Microsoft Entra Conditional Access, visit our documentation.

Upleveling static Conditional Access policies to risk-based Conditional Access policies

In 2022, when new regulations required government agencies to apply more stringent cybersecurity standards to protect against sophisticated online attacks, DOL decided to strengthen their Zero Trust implementation with phishing-resistant authentication and dynamic risk-based Conditional Access policies. Both would help them enforce the Zero Trust principle of least privilege access.

Microsoft Entra ID Protection capabilities made it possible for Conditional Access policies to assess sign-in risk and user risk, in addition to device risk, before granting access. Policies would tolerate different levels of user risk depending on whether the user signs in as a ‘privileged user’ or as a ‘regular user.’ Access for users deemed high-risk would always be blocked. Privileged users with low or medium risk would also be blocked. Regular users with low risk would have to reauthenticate within a set period of time, while users with medium risk would have to reauthenticate more frequently.
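The tiering described above can be sketched as a small decision function. This is an illustrative model of the logic, not DOL's actual policy implementation; the outcome labels are hypothetical.

```python
def access_decision(account_type: str, user_risk: str) -> str:
    """Map (account type, user risk level) to an access outcome, following
    the tiering described above: high risk is always blocked, privileged
    accounts tolerate no elevated risk, and regular users reauthenticate
    on a schedule that tightens as risk rises."""
    if user_risk == "high":
        return "block"
    if account_type == "privileged":
        # Privileged sign-ins are blocked even at low or medium risk.
        return "block" if user_risk in ("low", "medium") else "allow"
    if user_risk == "medium":
        return "reauthenticate-frequently"
    if user_risk == "low":
        return "reauthenticate-periodically"
    return "allow"

print(access_decision("privileged", "low"))   # block
print(access_decision("regular", "medium"))   # reauthenticate-frequently
```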

Two graphics listing the different types of risk detections in Microsoft Entra ID protection.

For more in-depth information on risk-based Conditional Access policies, visit our documentation.

Adding a layer of security for privileged users

A subset of DOL employees may operate as a ‘privileged user’ for some tasks and as a ‘regular user’ for others. To access less sensitive applications such as Microsoft 365, these employees sign in as a ‘regular user’ using a government-issued PIV card or Windows Hello for Business from their GFE device. To access highly sensitive applications and resources, or to execute sensitive tasks, they must sign in using a separate account that has privileged access rights.

Previously, the DOL assigned usernames, passwords, and basic multifactor authentication to privileged accounts, but this still left some risk of credential theft from phishing attacks. Since the most important accounts to secure are those with administrative rights, DOL chose to make privileged accounts more secure with phishing-resistant authentication: specifically, device-bound passkeys in the Microsoft Authenticator app. This is faster and less expensive to support than issuing employees a second PIV card and a second GFE device.

Privileged users only need to install the Microsoft Authenticator app on their government-issued cell phone. They don’t have to visit a special portal to provision and onboard their passkey. They simply sign in for the first time on their mobile phone using a Temporary Access Pass and set up their passkey in one fast, frictionless workflow. As an added benefit, passkeys also reduce the time to authenticate to DOL applications. According to Microsoft testing, signing in with a passkey is eight times faster than using a password and traditional multifactor authentication.1

After DOL finishes deploying passkeys for their privileged users, they plan to roll out passkeys to the rest of their workforce as a secondary authentication method that complements other passwordless methods such as Windows Hello for Business and certificate-based authentication (CBA).

To explore phishing-resistant authentication methods available with Microsoft Entra, explore the video series Phishing-resistant authentication in Microsoft Entra ID.

Using “report-only” mode in Conditional Access as a modeling tool

Every organization that modernizes their identity strategy and authentication methods, as DOL did, strengthens security, improves flexibility, and reduces costs. Using a modern, deeply integrated security toolset also provides valuable new insights. For example, you can use Conditional Access as a modeling and planning tool. By running policies in report-only mode, you can better understand your environment, investigate user behavior to uncover risk scenarios that would be hard to spot manually, and model solutions for those scenarios. This helps you decide which controls to apply to close any security gaps you discover.
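The mechanism can be illustrated with a toy evaluator. This is a simplified model of how report-only policies behave, not how Entra ID implements them: a report-only policy records the decision it would have made, but access is always granted.

```python
def evaluate(policy, sign_in, report_only_log):
    """Evaluate one sign-in against one policy. In report-only mode the
    would-be decision is recorded but never enforced, which is how
    report-only policies surface gaps without disrupting users."""
    decision = "block" if sign_in["risk"] in policy["blocked_risk_levels"] else "grant"
    if policy["report_only"]:
        report_only_log.append((sign_in["user"], decision))
        return "grant"  # never enforced in report-only mode
    return decision

policy = {"blocked_risk_levels": {"medium", "high"}, "report_only": True}
log = []
result = evaluate(policy, {"user": "alice", "risk": "high"}, log)
print(result, log)  # grant [('alice', 'block')]
```

Reviewing the accumulated log is the report-only analogue of the activity reports DOL used to spot privileged accounts visiting portals meant for regular users.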

DOL rolled out risk-based Conditional Access policies, in report-only mode, that enforce the use of passkeys by privileged users. In the activity reports, they observed employees signing in with their privileged accounts, then visiting portals that they should access as regular users, not as admins. DOL then adjusted their policies to block such behavior.

Running risk-based policies in report-only mode exposed behavior that DOL could then use policies to control. It also helped them to uncover inconsistencies and redundancies that reflected unaddressed technical debt; for example, policies that collided. Their goal is to consolidate and simplify their static policies into fewer, more comprehensive risk-based policies that block dangerous or unauthorized behavior while allowing employees to sign in faster and more securely to get their work done.

To learn more about Conditional Access report-only mode, visit our documentation.

Looking ahead

So far, DOL has integrated more than 200 applications with Entra ID for SSO. The team is still in the monitoring phase as they work to consolidate Conditional Access policies and ensure compliance with security requirements, such as the use of passkeys for accessing high-value assets. Not only are they reducing the number of policies they must maintain, but their logs are also cleaner, and it’s easier to find insights.

DOL’s future plans include implementing attestation, which will ensure that employees use a genuine version of the Authenticator app published by Microsoft before registering a passkey. They’re also investigating joining devices to Entra ID so they can centrally manage them from the cloud for easier deployment of updates, policies, and applications. This will also allow them to use policy to enforce enrollment in Windows Hello for Business, further advancing their transition to phishing-resistant authentication.

Learn more

Learn more about Microsoft Entra ID.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.


1Convincing a billion users to love passkeys: UX design insights from Microsoft to boost adoption and security, Sangeeta Ranjit and Scott Bingham. December 12, 2024.

How MSRC coordinates vulnerability research and disclosure while building community http://approjects.co.za/?big=en-us/security/blog/2025/03/13/how-msrc-coordinates-vulnerability-research-and-disclosure-while-building-community/ Thu, 13 Mar 2025 16:00:00 +0000 Learn about the Microsoft Security Response Center, which investigates vulnerabilities and releases security updates to help protect customers from cyberthreats.

The post How MSRC coordinates vulnerability research and disclosure while building community appeared first on Microsoft Security Blog.

In an era where discovering and rapidly mitigating security vulnerabilities is more important than ever before, the Microsoft Security Response Center (MSRC) is at the center of this work. MSRC focuses on investigating vulnerabilities, coordinating their disclosure, and releasing security updates to help protect customers and Microsoft from current and emerging cyberthreats related to security and privacy. MSRC partners with product teams across Microsoft—as well as external security researchers—to investigate reports of security vulnerabilities affecting Microsoft products and services.

MSRC also fosters the development of a stronger and more effective security researcher community through a variety of initiatives, including the Microsoft bug bounty program, the BlueHat security conference, the MSRC blog, and internal security training for engineers.

Microsoft uses a Coordinated Vulnerability Disclosure (CVD) process that recognizes security researchers while disclosing vulnerabilities in a responsible and timely manner.

Collaboration through bug bounty programs and researcher recognition

Security researchers are incentivized to find vulnerabilities and report them through the CVD process. Some reported vulnerabilities are eligible for rewards as part of Microsoft’s bug bounty programs. These programs are an important part of our proactive strategy of incentivizing the external security research community to partner with us and help protect our customers from security threats. Since their inception in 2013, Microsoft’s bug bounty programs have awarded more than $60 million in bounties to security researchers.

In 2024, we announced expansions to several existing bounty programs, and launched a new Defender Bounty Program and AI Bounty Program. We also expanded our bug bounty programs with Microsoft Zero Day Quest, which adds $4 million in potential bug bounty rewards for research into high-impact areas, specifically cloud and AI. Security researchers who report a vulnerability that isn’t eligible for a bug bounty can still take part in the Microsoft Researcher Recognition Program and be recognized for their work on the Researcher Leaderboard.

Coordinated Vulnerability Disclosure (CVD)

Microsoft follows the CVD principle when partnering with external security researchers to respond to and mitigate vulnerabilities in our products and services. This approach gives researchers recognition for their work and provides Microsoft an opportunity to address newly reported vulnerabilities before bad actors can exploit them.

To better protect our products and services, MSRC partners with Microsoft engineering teams to build proactive mitigations using the information provided by both internal and external security researchers. This can significantly reduce or eliminate classes of vulnerabilities.

Many cloud service vulnerabilities are fixed by Microsoft on our servers and don’t require customers to take action to stay secure, but for transparency we now disclose all critical cloud common vulnerabilities and exposures (CVEs). In cases where customers need to act, Microsoft provides clear and timely security guidance.

To help customers accelerate their security response and remediation, Microsoft recently expanded our CVD strategy to include machine-readable Common Security Advisory Framework (CSAF) files that complement our existing CVD data sharing channels. With CSAF files, Microsoft customers now have machine-readable information on known vulnerabilities. This capability is part of our comprehensive strategy for vulnerability disclosure, which includes our Security Updates API and the human-readable vulnerability disclosures provided in the MSRC Security Update Guide.
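Because CSAF advisories are JSON, they can be consumed programmatically. The minimal reader below uses a few fields from the OASIS CSAF 2.0 layout (document.tracking.id, vulnerabilities[].cve); real MSRC files carry many more fields, and the sample advisory and CVE identifier here are invented for illustration.

```python
import json

# A tiny, invented CSAF-shaped advisory used only to exercise the reader.
sample_csaf = json.dumps({
    "document": {
        "category": "csaf_security_advisory",
        "title": "Example Advisory",
        "tracking": {"id": "EXAMPLE-0001"},
    },
    "vulnerabilities": [
        {"cve": "CVE-2025-00000",
         "product_status": {"fixed": ["product-1", "product-2"]}},
    ],
})

def summarize_advisory(raw: str) -> dict:
    """Extract the advisory ID and the CVE IDs it covers from a CSAF
    JSON document, so remediation tooling can triage it automatically."""
    doc = json.loads(raw)
    return {
        "id": doc["document"]["tracking"]["id"],
        "cves": [v.get("cve") for v in doc.get("vulnerabilities", [])],
    }

print(summarize_advisory(sample_csaf))
```

A pipeline might feed such summaries into a vulnerability tracker alongside data from the Security Updates API.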

Microsoft Active Protections Program (MAPP)

The Microsoft Active Protections Program (MAPP) gives security technology providers early access to vulnerability information so that they can more rapidly provide updated protections to their customers. More than 100 MAPP partners receive security vulnerability information from the MSRC in advance of Microsoft’s monthly security update release. Partners use this information to provide protections through their security software or devices, such as antivirus software, network-based intrusion detection systems, or host-based intrusion prevention systems.

To learn about the MAPP program, including which types of organizations are eligible to join MAPP, what is required of member organizations, and MAPP program tiers, read the MAPP Frequently Asked Questions.

Release of security updates

Microsoft-managed backend services require no additional customer action to stay secure. In cases where customers must take action to stay secure, we release security updates.

Once a fix is ready for a vulnerability that requires customer action, MSRC releases security updates. These updates ship for most Microsoft products on the second Tuesday of each month at 10:00 AM PT, and MSRC recommends that IT administrators and other customers plan their deployment schedules accordingly.
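Teams that automate patch planning often compute that release date directly. A small helper (standard library only) can be sketched as:

```python
import datetime

def second_tuesday(year: int, month: int) -> datetime.date:
    """Return the second Tuesday of the given month, the day MSRC
    releases security updates for most Microsoft products."""
    first = datetime.date(year, month, 1)
    # weekday(): Monday == 0, Tuesday == 1.
    offset = (1 - first.weekday()) % 7      # days until the first Tuesday
    return first + datetime.timedelta(days=offset + 7)

print(second_tuesday(2025, 3))  # 2025-03-11
```

Scheduling tools can call this to pre-stage maintenance windows a few days after each release.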

Cybersecurity education through content and conferences

A key component of MSRC’s work is to provide educational content for the security community. MSRC shares important public updates on vulnerabilities and more on the MSRC blog (you can also subscribe through the MSRC RSS feed). The latest information about security-related deployments, known vulnerabilities, and advisories can be found on the Security Update Guide.

MSRC also works to build a stronger security researcher community by hosting the BlueHat security conference. BlueHat brings together leading researchers and security practitioners, providing a platform to share knowledge and best practices around security. If you missed the latest conference, you can view on-demand presentations from past conferences or listen to the BlueHat Podcast (subscribe here).

Learn more about the Microsoft Security Response Center

To learn more about MSRC, visit us at msrc.microsoft.com. There, you can find detailed information on our programs and access educational resources. You can also learn more about MSRC and Microsoft’s related security initiatives through the following resources:

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.

Women’s History Month: Why different perspectives in cybersecurity and AI matter more than ever before http://approjects.co.za/?big=en-us/security/blog/2025/03/06/womens-history-month-why-different-perspectives-in-cybersecurity-and-ai-matter-more-than-ever-before/ Thu, 06 Mar 2025 21:00:00 +0000 This Women’s History Month serves as a crucial moment for us to lead and continue to pave the way for a more inclusive future. I am truly honored to support my amazing women colleagues who continue to excel in their careers. Their diverse perspectives and talents are invaluable, driving innovation and progress across various industries. I am proud to be a part of Microsoft Security, which is focused on building and nurturing an inclusive cybersecurity workforce and curating careers, tools, and resources that work for everyone. We recognize that this is what promotes business growth, strengthens global defenses, and enhances AI safety.

The post Women’s History Month: Why different perspectives in cybersecurity and AI matter more than ever before appeared first on Microsoft Security Blog.

This Women’s History Month serves as a crucial moment for us to lead and continue to pave the way for a more inclusive future. I am truly honored to support my amazing women colleagues who continue to excel in their careers and am grateful to have so many allies who have extended their hands to help guide and shape me to the person I am today.  

Just last week I was in Tokyo for the Japan Security Forum, where Miki Tsusaka, President of Microsoft Japan, and I had a great conversation during a CyberWomen Asia fireside chat about the importance of women in cybersecurity. The chat was followed by a panel discussion with Tsutaki-san, a security leader at Yamaha Motor Corporation, and Debbie Furtado, one of our bright principal group engineering managers. The event highlighted how different perspectives and talents are invaluable in driving innovation and progress across industries. I am proud to be a part of Microsoft Security, which is focused on building and nurturing an inclusive cybersecurity workforce and curating careers, tools, and resources that work for everyone. We recognize that this promotes business growth, strengthens global defenses, and enhances AI safety.

According to the World Economic Forum, gender equality in entrepreneurship drives economic growth and innovation.1 McKinsey and Company has also observed that closing the gender gap in employment and entrepreneurship could increase global GDP by 20%, and that organizations with 30% or more women on executive teams are 27% more likely to achieve higher profitability.2  

For a better future we need everyone on this journey. This is particularly significant in cybersecurity, where we face a critical shortage of talent and where cyberthreat actors come from diverse backgrounds.


Addressing the skills gap in cybersecurity and AI

There is a significant talent gap in cybersecurity. The 2024 ISC2 Cybersecurity Workforce Study reports a global shortage of 4.7 million skilled workers.3 This shortage has been a significant challenge over the past 12 months and is expected to continue for the next two years. To address this growing concern, we must embrace a wide range of perspectives and backgrounds to foster innovation and find more effective solutions to these challenges.

By incorporating individuals with varied perspectives, experiences, and approaches into the cybersecurity workforce, we can improve problem-solving capabilities and strengthen strategic defenses.

Cybercriminals come from various cultures and backgrounds, bringing different perspectives. Security professionals with similarly varied backgrounds can provide creative approaches and unique insights to counter these cyberthreats. Likewise, in AI, having different backgrounds and perspectives helps improve safety and reduce bias.

Continue to deepen expertise and invite different perspectives

While progress has been made in creating opportunities for women in cybersecurity, significant work remains to remove entry barriers. It is essential to continue our efforts to improve representation in cybersecurity by creating new pathways and gaining support from more allies. I wholeheartedly encourage you to actively contribute to this objective through the many organizations and programs available and by doing the following: 

  • Share the accomplishments of meaningful role models with a wide range of experiences and perspectives. 
  • Adjust job requirements to remove potential biases. 
  • Offer inclusive training that encourages professionals, particularly those in their early careers, and encourage them to advance their skills in cybersecurity. 
  • Volunteer for educational programs that include cybersecurity and AI training. 
  • Reach out to community groups that advocate for mentorship opportunities. 
  • Act as an ally and create opportunities for those interested in cybersecurity careers, such as by encouraging them to participate and speak up and introducing them to peers. 

Security should be for everyone, and we are all in this together. By working collectively, we can enhance the global security workforce and contribute to a promising future.

Register for our upcoming panel “Harnessing Diversity – Strengthening the Cybersecurity Workforce in the Age of AI” and visit Microsoft’s cybersecurity awareness page for resources and training provided by Microsoft security experts, enabling everyone in your organization to become a cyber defender. Let us all acknowledge the importance of diversity in cybersecurity and its critical role in safeguarding our future and shaping a history we can be proud of. 

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.


1Advancing gender parity in entrepreneurship: strategies for a more equitable future, World Economic Forum. January 20, 2025.

2Diversity matters even more: The case for holistic impact, McKinsey and Company. December 5, 2023.

32024 ISC2 Cybersecurity Workforce Study, ISC2. October 31, 2024.

The post Women’s History Month: Why different perspectives in cybersecurity and AI matter more than ever before appeared first on Microsoft Security Blog.

]]>
Securing generative AI models on Azure AI Foundry http://approjects.co.za/?big=en-us/security/blog/2025/03/04/securing-generative-ai-models-on-azure-ai-foundry/ Tue, 04 Mar 2025 18:00:00 +0000 Discover how Microsoft secures AI models on Azure AI Foundry, ensuring robust security and trustworthy deployments for your AI systems.

The post Securing generative AI models on Azure AI Foundry appeared first on Microsoft Security Blog.

]]>
New generative AI models with a broad range of capabilities are emerging every week. In this world of rapid innovation, when choosing the models to integrate into your AI system, it is crucial to make a thoughtful risk assessment that ensures a balance between leveraging new advancements and maintaining robust security. At Microsoft, we are focusing on making our AI development platform a secure and trustworthy place where you can explore and innovate with confidence. 

Here we’ll talk about one key part of that: how we secure the models and the runtime environment itself. How do we protect against a bad model compromising your AI system, your larger cloud estate, or even Microsoft’s own infrastructure?  

How Microsoft protects data and software in AI systems

But before we set off on that, let me set to rest one very common misconception about how data is used in AI systems. Microsoft does not use customer data to train shared models, nor does it share your logs or content with model providers. Our AI products and platforms are part of our standard product offerings, subject to the same terms and trust boundaries you’ve come to expect from Microsoft, and your model inputs and outputs are considered customer content and handled with the same protection as your documents and email messages. Our AI platform offerings (Azure AI Foundry and Azure OpenAI Service) are 100% hosted by Microsoft on its own servers, with no runtime connections to the model providers. We do offer some features, such as model fine-tuning, that allow you to use your data to create better models for your own use—but these are your models that stay in your tenant. 

So, turning to model security: the first thing to remember is that models are just software, running in Azure Virtual Machines (VM) and accessed through an API; they don’t have any magic powers to break out of that VM, any more than any other software you might run in a VM. Azure is already well defended against software running in a VM attempting to attack Microsoft’s infrastructure—bad actors try to do that every day, not needing AI for it, and AI Foundry inherits all of those protections. This is a “zero-trust” architecture: Azure services do not assume that things running on Azure are safe!

What is Zero Trust?

Learn more

Now, it is possible to conceal malware inside an AI model. This could pose a danger to you in the same way that malware in any other open- or closed-source software might. To mitigate this risk, we scan and test our highest-visibility models before release:

  • Malware analysis: Scans AI models for embedded malicious code that could serve as an infection vector and launchpad for malware. 
  • Vulnerability assessment: Scans for common vulnerabilities and exposures (CVEs) and zero-day vulnerabilities targeting AI models. 
  • Backdoor detection: Scans model functionality for evidence of supply chain attacks and backdoors such as arbitrary code execution and network calls. 
  • Model integrity: Analyzes an AI model’s layers, components, and tensors to detect tampering or corruption. 
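To make the backdoor-detection bullet concrete: one well-known infection vector for models distributed as Python pickles is a `__reduce__` payload that executes code the moment the file is loaded. The sketch below is a deliberately minimal illustration of static opcode scanning using only the standard library; it is an assumption-laden teaching example, not Microsoft's actual scanning pipeline, which is far more thorough.

```python
import io
import pickle
import pickletools

# Opcodes that can import modules or call functions while a pickle loads.
# A benign weights file has little reason to use any of them.
SUSPICIOUS_OPCODES = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ"}

def scan_pickle_bytes(data: bytes) -> list:
    """Return findings describing code-execution constructs in a pickle stream."""
    findings = []
    recent_strings = []  # STACK_GLOBAL pulls module/attribute names from the stack
    for opcode, arg, _pos in pickletools.genops(io.BytesIO(data)):
        if isinstance(arg, str):
            recent_strings = (recent_strings + [arg])[-2:]
        if opcode.name in SUSPICIOUS_OPCODES:
            context = " ".join(recent_strings) if "GLOBAL" in opcode.name else ""
            findings.append(f"{opcode.name} {context}".strip())
    return findings

# Demo: a malicious payload of the classic __reduce__ form, next to benign weights.
import os

class Evil:
    def __reduce__(self):  # would run os.system the moment pickle.load() is called
        return (os.system, ("echo pwned",))

print(scan_pickle_bytes(pickle.dumps(Evil())))        # flags GLOBAL/REDUCE-style opcodes
print(scan_pickle_bytes(pickle.dumps({"w": [0.1]})))  # -> [] (plain data is clean)
```

Note that `pickle.dumps` never executes the payload; only loading does, which is why scanning before load matters. Safer serialization formats (such as weights-only tensor formats) avoid this class of risk entirely.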

You can identify which models have been scanned by the indication on their model card—no customer action is required to get this benefit. For especially high-visibility models like DeepSeek R1, we go even further and have teams of experts tear apart the software—examining its source code, having red teams probe the system adversarially, and so on—to search for any potential issues before releasing the model. This higher level of scanning doesn’t (yet) have an explicit indicator in the model card, but given its public visibility we wanted to get the scanning done before we had the UI elements ready. 

Defending and governing AI models

Of course, as security professionals you presumably realize that no scans can detect all malicious action. This is the same problem an organization faces with any other third-party software, and organizations should address it in the usual manner: trust in that software should come in part from trusted intermediaries like Microsoft, but above all should be rooted in an organization’s own trust (or lack thereof) for its provider.  

For those wanting a more secure experience, once you’ve chosen and deployed a model, you can use the full suite of Microsoft’s security products to defend and govern it. You can read more about how to do that here: Securing DeepSeek and other AI systems with Microsoft Security.

And of course, as the quality and behavior of each model is different, you should evaluate any model not just for security, but for whether it fits your specific use case, by testing it as part of your complete system. This is part of a wider approach to securing AI systems, which we’ll come back to in depth in an upcoming blog.

Using Microsoft Security to secure AI models and customer data

In summary, the key points of our approach to securing models on Azure AI Foundry are: 

  1. Microsoft carries out a variety of security investigations for key AI models before hosting them in the Azure AI Foundry Model Catalogue, and continues to monitor for changes that may impact the trustworthiness of each model for our customers. You can use the information on the model card, as well as your trust (or lack thereof) in any given model builder, to assess your position towards any model the way you would for any third-party software library. 
  2. All models hosted on Azure are isolated within the customer tenant boundary. There is no access to or from the model provider, including close partners like OpenAI. 
  3. Customer data is not used to train models, nor is it made available outside of the Azure tenant (unless the customer designs their system to do so). 

Learn more with Microsoft Security

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.

The post Securing generative AI models on Azure AI Foundry appeared first on Microsoft Security Blog.

]]>
Securing DeepSeek and other AI systems with Microsoft Security http://approjects.co.za/?big=en-us/security/blog/2025/02/13/securing-deepseek-and-other-ai-systems-with-microsoft-security/ Thu, 13 Feb 2025 17:00:00 +0000 Microsoft Security provides cyberthreat protection, posture management, data security, compliance and governance, and AI safety, to secure AI applications that you build and use. These capabilities can also be used to secure and govern AI apps built with the DeepSeek R1 model and the use of the DeepSeek app. 

The post Securing DeepSeek and other AI systems with Microsoft Security appeared first on Microsoft Security Blog.

]]>
A successful AI transformation starts with a strong security foundation. With a rapid increase in AI development and adoption, organizations need visibility into their emerging AI apps and tools. Microsoft Security provides threat protection, posture management, data security, compliance, and governance to secure AI applications that you build and use. These capabilities can also be used to help enterprises secure and govern AI apps built with the DeepSeek R1 model and gain visibility and control over the use of the separate DeepSeek consumer app. 

Secure and govern AI apps built with the DeepSeek R1 model on Azure AI Foundry and GitHub 

Develop with trustworthy AI 

Last week, we announced DeepSeek R1’s availability on Azure AI Foundry and GitHub, joining a diverse portfolio of more than 1,800 models.   

Customers today are building production-ready AI applications with Azure AI Foundry, while accounting for their varying security, safety, and privacy requirements. Similar to other models provided in Azure AI Foundry, DeepSeek R1 has undergone rigorous red teaming and safety evaluations, including automated assessments of model behavior and extensive security reviews to mitigate potential risks. Microsoft’s hosting safeguards for AI models are designed to keep customer data within Azure’s secure boundaries. 

Azure AI Content Safety

Learn more

With Azure AI Content Safety, built-in content filtering is available by default to help detect and block malicious, harmful, or ungrounded content, with opt-out options for flexibility. Additionally, the safety evaluation system allows customers to efficiently test their applications before deployment. These safeguards help Azure AI Foundry provide a secure, compliant, and responsible environment for enterprises to confidently build and deploy AI solutions. See Azure AI Foundry and GitHub for more details.

Start with Security Posture Management

Microsoft Defender for Cloud

Learn more

AI workloads introduce new cyberattack surfaces and vulnerabilities, especially when developers leverage open-source resources. Therefore, it’s critical to start with security posture management, to discover all AI inventories, such as models, orchestrators, grounding data sources, and the direct and indirect risks around these components. When developers build AI workloads with DeepSeek R1 or other AI models, Microsoft Defender for Cloud’s AI security posture management capabilities can help security teams gain visibility into AI workloads, discover AI cyberattack surfaces and vulnerabilities, detect cyberattack paths that can be exploited by bad actors, and get recommendations to proactively strengthen their security posture against cyberthreats.

Figure 1. AI security posture management in Defender for Cloud detects an attack path to a DeepSeek R1 workload, where an Azure virtual machine is exposed to the internet.

By mapping out AI workloads and synthesizing security insights such as identity risks, sensitive data, and internet exposure, Defender for Cloud continuously surfaces contextualized security issues and suggests risk-based security recommendations tailored to prioritize critical gaps across your AI workloads. Relevant security recommendations also appear within the Azure AI resource itself in the Azure portal. This provides developers or workload owners with direct access to recommendations and helps them remediate cyberthreats faster. 

Safeguard DeepSeek R1 AI workloads with cyberthreat protection

While having a strong security posture reduces the risk of cyberattacks, the complex and dynamic nature of AI requires active monitoring in runtime as well. No AI model is exempt from malicious activity; any model can be vulnerable to prompt injection cyberattacks and other cyberthreats. Monitoring the latest models is critical to ensuring your AI applications are protected.

Integrated with Azure AI Foundry, Defender for Cloud continuously monitors your DeepSeek AI applications for unusual and harmful activity, correlates findings, and enriches security alerts with supporting evidence. This provides your security operations center (SOC) analysts with alerts on active cyberthreats such as jailbreak cyberattacks, credential theft, and sensitive data leaks. For example, when a prompt injection cyberattack occurs, Azure AI Content Safety prompt shields can block it in real-time. The alert is then sent to Microsoft Defender for Cloud, where the incident is enriched with Microsoft Threat Intelligence, helping SOC analysts understand user behaviors with visibility into supporting evidence, such as IP address, model deployment details, and suspicious user prompts that triggered the alert. 

Figure 2. Microsoft Defender for Cloud integrates with Azure AI to detect and respond to prompt injection cyberattacks.
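Prompt shields are a managed, classifier-based service, but the underlying idea of screening input before it reaches a model can be illustrated with a deliberately simple sketch. Everything below is a hypothetical example: real input filters use trained models rather than keyword lists, and the function names are invented for illustration.

```python
import re

# Illustrative jailbreak phrasings only; a production filter uses trained
# classifiers, not keyword lists, and pairs input screening with output checks.
INJECTION_PATTERNS = [
    r"ignore (all |any )?(previous|prior) (instructions|rules)",
    r"disregard (the|your) (system|safety) (prompt|policy)",
    r"reveal (the|your) system prompt",
    r"you are now in developer mode",
]

def looks_like_injection(user_prompt: str) -> bool:
    """Flag prompts that match known jailbreak phrasings (input-side screening)."""
    text = user_prompt.lower()
    return any(re.search(pattern, text) for pattern in INJECTION_PATTERNS)

def guarded_completion(user_prompt: str, call_model) -> str:
    """Screen input before forwarding to the model; a real system would also
    raise an alert to the SOC here, with the prompt as supporting evidence."""
    if looks_like_injection(user_prompt):
        return "Request blocked: potential prompt injection detected."
    return call_model(user_prompt)
```

The alert-enrichment flow described above starts exactly at the point where the screen fires: the blocked prompt, caller identity, and deployment details become the evidence attached to the incident.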

Additionally, these alerts integrate with Microsoft Defender XDR, allowing security teams to centralize AI workload alerts into correlated incidents to understand the full scope of a cyberattack, including malicious activities related to their generative AI applications. 

Figure 3. A security alert for a jailbreak prompt injection attack on an Azure AI model deployment is flagged in Defender for Cloud.

Secure and govern the use of the DeepSeek app

In addition to the DeepSeek R1 model, DeepSeek also provides a consumer app hosted on its own servers, where data collection and cybersecurity practices may not align with your organizational requirements, as is often the case with consumer-focused apps. This underscores the risks organizations face if employees and partners introduce unsanctioned AI apps, leading to potential data leaks and policy violations. Microsoft Security provides capabilities to discover the use of third-party AI applications in your organization and offers controls for protecting and governing their use.

Secure and gain visibility into DeepSeek app usage 

Microsoft Defender for Cloud Apps

Learn more

Microsoft Defender for Cloud Apps provides ready-to-use risk assessments for more than 850 Generative AI apps, and the list of apps is updated continuously as new ones become popular. This means that you can discover the use of these Generative AI apps in your organization, including the DeepSeek app, assess their security, compliance, and legal risks, and set up controls accordingly. For example, for high-risk AI apps, security teams can tag them as unsanctioned apps and block users’ access to the apps outright.

Figure 4. Discover usage and control access to Generative AI applications based on their risk factors in Defender for Cloud Apps.
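The discover → score → tag workflow described above can be pictured as a small triage loop. The risk factors, weights, and thresholds below are invented purely for illustration; Defender for Cloud Apps evaluates many more attributes per app, and these names are not its API.

```python
# Hypothetical risk factors; a real catalog tracks dozens per app.
def risk_score(app: dict) -> int:
    """Score an app from 0 (safe) to 10 (risky) on a few illustrative attributes."""
    score = 0
    if not app.get("encrypts_data_at_rest"):
        score += 3
    if not app.get("soc2_certified"):
        score += 2
    if app.get("data_residency") not in ("US", "EU"):
        score += 2
    if app.get("trains_on_customer_data"):
        score += 3
    return score

def triage(apps: list, block_threshold: int = 7) -> dict:
    """Tag each discovered app 'sanctioned', 'monitored', or 'unsanctioned'."""
    tags = {}
    for app in apps:
        s = risk_score(app)
        tags[app["name"]] = ("unsanctioned" if s >= block_threshold
                             else "monitored" if s >= 4 else "sanctioned")
    return tags
```

Tagging an app unsanctioned is what translates the risk assessment into an enforceable access block for end users.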

Comprehensive data security 

Data security

Learn more

In addition, Microsoft Purview Data Security Posture Management (DSPM) for AI provides visibility into data security and compliance risks, such as sensitive data in user prompts and non-compliant usage, and recommends controls to mitigate the risks. For example, the reports in DSPM for AI can offer insights on the type of sensitive data being pasted to Generative AI consumer apps, including the DeepSeek consumer app, so data security teams can create and fine-tune their data security policies to protect that data and prevent data leaks. 

Figure 5. Microsoft Purview Data Security Posture Management (DSPM) for AI enables security teams to gain visibility into data risks, broken down by app and department, and get recommended actions to address them.

Prevent sensitive data leaks and exfiltration  

Microsoft Purview Data Loss Prevention

Learn more

The leakage of organizational data is among the top concerns for security leaders regarding AI usage, highlighting the importance for organizations to implement controls that prevent users from sharing sensitive information with external third-party AI applications.

Microsoft Purview Data Loss Prevention (DLP) enables you to prevent users from pasting sensitive data or uploading files containing sensitive content into Generative AI apps from supported browsers. Your DLP policy can also adapt to insider risk levels, applying stronger restrictions to users categorized as elevated risk and less stringent restrictions to those categorized as low risk. For example, elevated-risk users are restricted from pasting sensitive data into AI applications, while low-risk users can continue their productivity uninterrupted. By leveraging these capabilities, you can safeguard your sensitive data from potential risks from using external third-party AI applications. Security admins can then investigate these data security risks and perform insider risk investigations within Purview. These same data security risks are surfaced in Defender XDR for holistic investigations.

Figure 6. An endpoint Data Loss Prevention policy blocks sensitive data from being pasted into third-party AI applications in supported browsers.
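Under the hood, a DLP engine classifies content against sensitive information types before deciding whether a paste is allowed. The sketch below is a minimal, hypothetical illustration of that flow for one type (a credit card pattern with a Luhn checksum, plus the per-user risk level described above); Purview's real classifiers cover far more types and signals, and these function names are invented.

```python
import re

def luhn_valid(digits: str) -> bool:
    """Luhn checksum, used to cut false positives on 13-16 digit numbers."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# Runs of 13-16 digits, optionally separated by spaces or hyphens.
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def contains_card_number(text: str) -> bool:
    for match in CARD_RE.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if 13 <= len(digits) <= 16 and luhn_valid(digits):
            return True
    return False

def allow_paste(text: str, user_risk: str) -> bool:
    """Elevated-risk users are blocked on any detected card number; lower-risk
    users are allowed (a real policy might audit-log instead of allowing)."""
    if contains_card_number(text):
        return user_risk != "elevated"
    return True
```

The risk-adaptive branch at the end mirrors the insider-risk integration described above: the same classifier verdict produces different enforcement depending on who is pasting.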

This is a quick overview of some of the capabilities to help you secure and govern AI apps that you build on Azure AI Foundry and GitHub, as well as AI apps that users in your organization use. We hope you find this useful!

To learn more and to get started with securing your AI apps, take a look at the additional resources below:  

Learn more with Microsoft Security

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity. 

The post Securing DeepSeek and other AI systems with Microsoft Security appeared first on Microsoft Security Blog.

]]>
Build a stronger security strategy with proactive and reactive incident response: Cyberattack Series http://approjects.co.za/?big=en-us/security/blog/2025/02/10/build-a-stronger-security-strategy-with-proactive-and-reactive-incident-response-cyberattack-series/ Mon, 10 Feb 2025 17:00:00 +0000 Find out how a cyberattack by Storm-2077 was halted faster because the Microsoft Incident Response team is both proactive and reactive at the same time.

The post Build a stronger security strategy with proactive and reactive incident response: Cyberattack Series appeared first on Microsoft Security Blog.

]]>
There are countless statistics about cybercrime, and one of the most striking concerns threat actor profits: they continue to increase year over year and are on track to rise from $9.22 trillion in 2024 to $13.82 trillion by 2028.1 If the financial drain caused by threat actors were pooled, it would rank as the third largest gross domestic product (GDP) among countries, trailing only the number two spot, China, at $18.27 trillion.2

That statistic alone tells us a great deal about the importance of preparedness for a potential cyberattack, which includes a robust incident response plan. To create such a plan, it is critical to understand potential risks, and one of the best ways to do that is to conduct a proactive threat hunt and compromise assessment.

Microsoft Incident Response is made up of highly skilled investigators, researchers, engineers, and analysts who specialize in handling global security incidents. In addition to reactive response, they also conduct proactive compromise assessments to find threat actor activity. They’ll provide recommendations and best practice guidance to strengthen an organization’s security posture.

Security practitioners at work in a security operations center.

Microsoft Incident Response

Your first call before, during, and after a cybersecurity incident.

Microsoft Incident Response compromise assessments utilize the same methodology and resources as a reactive investigation, but without the time pressure and crisis-driven decision making associated with a live cyberattack. Compromise assessments are often used by organizations that have had a prior incident and want to measure their security posture after implementing new security measures. Some customers use the service as an annual assessment prior to locking down change controls. Others may use it to assess the environment of an acquisition prior to joining infrastructures.

What happens when a compromise assessment turns into a reactive incident response engagement? Let’s dive into a recent situation where our team encountered this very scenario.

Why differentiate between proactive and reactive investigations?

What are indicators of compromise?

Read more

It is important to understand the key differences between proactive and reactive investigations, as each has different goals and measures for success. Microsoft Incident Response’s proactive compromise assessments are focused on detection and prevention, which includes identifying potential indicators of compromise (IOCs), bringing attention to potential vulnerabilities, and helping customers mitigate risks by implementing security hardening measures.

Our reactive investigations are centered on incident management during and immediately after a compromise, including incident analysis, threat hunting, tactical containment, and Tier 0 recovery, all while under the pressure of an active cyberattack.

Proactive and reactive incident response are essential capabilities for a more robust defense strategy. They enable an organization to address an active cyberattack during a period when time and knowing the next steps are critical, while also providing the expertise needed to help prevent future incidents. Not all organizations have the resources required to maintain an incident response team capable of both proactive and reactive approaches and may want to consider using a third-party service.

The importance of Microsoft’s “double duty” incident response experts

When confronted by an active threat actor, two things are at the forefront of success and can’t be lost—time and knowledge.

While conducting a proactive compromise assessment for a nonprofit organization in mid-2024, Microsoft Incident Response began their forensic investigation. The team initially identified small artifacts of interest, but the assessment quickly changed as suspicious events began to unfold. At the time the threat actor was not known, but it has since been tracked as Storm-2077, a Chinese state actor that has been active since at least January 2024. Storm-2077’s techniques focus on email data theft, using valid credentials harvested from compromised systems. Storm-2077 was lurking in the shadows of the organization’s environment. When the threat actors felt they had been detected, they put their fingers on keyboards and started making moves.

Precious time to remediate was not lost. Microsoft Incident Response immediately switched from proactive to reactive mode. The threat actor created a global administrator account and began disabling legitimate organizational global administrator accounts to gain full control of the environment. The organization’s IT team, already synchronized with Microsoft Incident Response through the active compromise assessment, took note of the event and came to Microsoft for deconfliction. Once the activity was determined to be malicious, the IT team disabled the access, and the proactive engagement converted to a reactive investigation. Because of this collaboration, the threat actor was contained and access was remediated quickly.

The threat actor had likely been present in the organization’s environment for a few months or more. They had taken advantage of a stolen session token to conduct a token replay attack, and through this had gained access to multiple accounts.

Proactive assessments that don’t utilize reactive investigation teams for delivery may result in a delay in responding or even generate more challenges for the incoming investigation team.

Thankfully, Microsoft Incident Response conducts proactive compromise assessments with the same resources that deliver reactive investigations. They can take immediate action to halt active cyberthreats before they do more harm.

Read the report to go deeper into the details of the cyberattack, including Storm-2077 tactics, the response activity, and lessons that other organizations can learn from this case.

What is the Cyberattack Series?

With our Cyberattack Series, customers will discover how Microsoft Incident Response investigates unique and notable attacks. For each cyberattack story, we will share:

  • How the cyberattack happened.
  • How the breach was discovered.
  • Microsoft’s investigation and eviction of the threat actor.
  • Strategies to avoid similar cyberattacks.

Learn more

To learn more about Microsoft Incident Response capabilities, please visit our website, or reach out to your Microsoft account manager or Premier Support contact.

Download our Unified Security e-book to learn more about how Microsoft can help you be more secure.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.


1Cybercrime Expected To Skyrocket in Coming Years, Statista. February 22, 2024.

2World GDP Rankings 2024 | Top 10 Countries Ranked By GDP, Forbes India. November 4, 2024.

The post Build a stronger security strategy with proactive and reactive incident response: Cyberattack Series appeared first on Microsoft Security Blog.

]]>
3 priorities for adopting proactive identity and access security in 2025 http://approjects.co.za/?big=en-us/security/blog/2025/01/28/3-priorities-for-adopting-proactive-identity-and-access-security-in-2025/ Tue, 28 Jan 2025 17:00:00 +0000 Adopting proactive defensive measures is the only way to get ahead of determined efforts to compromise identities and gain access to your environment.

The post 3 priorities for adopting proactive identity and access security in 2025 appeared first on Microsoft Security Blog.

]]>
If 2024 taught us anything, it’s that a proactive, no-compromises approach to security is essential for 2025 and beyond.

Nation-states and advanced cybercriminals are making significant investments in infrastructure and automation to intensify familiar cyberattack patterns; password attacks, for example, escalated from 579 incidents per second in 20211 to 7,000 in 2024.2 These groups are also adopting emerging technologies such as AI to create deepfakes and personalized spear-phishing campaigns that manipulate people into granting unauthorized access.

Adopting proactive defensive measures is the only way to get ahead of such determined efforts to compromise identities and gain access to your environment.

Microsoft is strengthening our own defenses through the Secure Future Initiative (SFI), a multiyear commitment to advance the way we design, build, test, and operate Microsoft technology to ensure it meets the highest possible standards for security. One of our first steps was to conduct a full inventory of our environment and do a thorough “spring cleaning,” deleting 730,000 outdated and non-compliant apps and removing 5.75 million unused or outdated Microsoft Entra ID systems from production and test areas.3 As part of this process, we deeply examined identity and network access controls, addressed top risks, implemented standard practices, and improved our incident response.

We learned from talking with our largest customers that many are dealing with the exact same issues; they’re also assessing their environments to surface potential vulnerabilities and strengthen their defenses. Based on these learnings and on the evolving behavior of threat actors, we’ve identified three priorities for enhancing identity and access security measures for 2025:

  1. Start secure, stay secure, and prepare for new cyberthreats.
  2. Extend Zero Trust access controls to all resources.
  3. Use generative AI to tip the scales in favor of defenders.

1. Start secure, stay secure, and prepare for new cyberthreats

Many organizations struggle to eliminate technical and security debt while continuing to add new users, resources, and applications. While more of our customers are implementing basic identity security measures, such as multifactor authentication, they may still not enforce them everywhere. Moreover, basic measures aren’t enough to protect against advanced identity attacks such as token theft4 or adversary-in-the-middle phishing.5

It’s essential to understand your entire attack surface, identify all potential entry points, and proactively apply access security that closes any gaps.

Traditional security approaches deploy security tools and measures “as needed.” Unfortunately, the additive approach of starting at 100% open and then dialing up defenses leaves holes that bad actors can exploit and use as launching pads for lateral movement. Reactive security isn’t enough to safeguard your environment. Our guidance for 2025 is to always start at the highest level of security (Secure by Default), then dial back as needed for compatibility or other reasons. It’s also critical to protect all identities: employees, contractors, partners, customers, and, most importantly, machine, service, and AI identities.

Security defaults in Microsoft Entra ID

Learn more

To encourage Secure by Default practices with customers, Microsoft last year mandated the use of multifactor authentication across the Microsoft Azure portal, Microsoft Entra admin center, and Microsoft Intune admin center. To complement security defaults, we started rolling out Microsoft-managed Conditional Access policies for all new tenants to ensure you benefit from baseline risk-based security policies that are pre-configured and turned on by default.6 Tenants that retain security defaults experience 80% fewer compromised accounts than unprotected tenants, while compromise rates have fallen by 20.5% for Microsoft Entra ID Premium tenants with Microsoft-managed policies enabled.6

Outlined below are practical measures that any security leader can implement to improve hygiene and safeguard identities within their organization:

  • Implement multifactor authentication: Prioritize phishing-resistant authentication methods like passkeys, which are considered the most secure option currently available. Require multifactor authentication for all applications, including private and legacy ones. Also consider using high-assurance credentials like digital employee IDs with facial matching for workflows such as new employee onboarding and password resets.
  • Employ risk-based Conditional Access policies and continuous access evaluation: Configure strong Conditional Access policies that initiate additional security measures, such as step-up authentication, automatically for high-risk sign-ins. Allow only just-enough access, and ideally just-in-time access, to critical resources. Augment Conditional Access with continuous access evaluation to ensure ongoing access checks and to protect against token theft.
  • Discover and manage shadow IT: Detect unauthorized apps (also known as shadow IT) and tenants, so you can control access to them. Shadow IT often lacks essential security controls that organizations enforce and manage to prevent compromise. Shadow tenants, often created for development and testing, may lack sufficient security policies and controls. Establish standard processes for creating new tenants that are secure by default and then safely retiring them when they’re no longer needed.
  • Secure access for non-human identities: Start by taking an inventory of your workload identities. Replace secrets, credentials, certificates, and keys with more secure authentication, such as managed identities for Azure resources. Implement least privilege and just-in-time access coupled with granular Conditional Access policies for workload identities.  
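
The first two bullets above can be automated: Conditional Access policies are plain JSON objects managed through the Microsoft Graph API. A minimal sketch in Python (the payload shape follows the Graph v1.0 schema; the authentication-strength GUID shown is the built-in phishing-resistant MFA strength, and the excluded group ID is a placeholder for your break-glass accounts):

```python
import json

def build_mfa_policy(name="Require phishing-resistant MFA",
                     excluded_groups=()):
    """Build a Conditional Access policy payload for the Microsoft Graph
    /identity/conditionalAccess/policies endpoint (shape per the Graph
    v1.0 schema; the group ID passed in below is a placeholder)."""
    return {
        "displayName": name,
        # Start in report-only mode, then enforce once impact is verified.
        "state": "enabledForReportingButNotEnforced",
        "conditions": {
            "users": {
                "includeUsers": ["All"],
                "excludeGroups": list(excluded_groups),  # break-glass accounts
            },
            "applications": {"includeApplications": ["All"]},
        },
        "grantControls": {
            "operator": "OR",
            # Built-in "Phishing-resistant MFA" authentication strength
            # (passkeys/FIDO2, Windows Hello for Business, cert-based auth).
            "authenticationStrength": {
                "id": "00000000-0000-0000-0000-000000000004"
            },
        },
    }

policy = build_mfa_policy(excluded_groups=["<break-glass-group-id>"])
print(json.dumps(policy, indent=2))
# POST to https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies
# with a token holding the Policy.ReadWrite.ConditionalAccess permission.
```

Deploying in report-only mode first lets you measure impact before enforcing, which matches the guidance above of starting at the highest level of security and dialing back only as needed.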

To get started: Explore Microsoft Entra ID capabilities for multifactor authentication, Conditional Access, continuous access evaluation, and Microsoft Entra ID Protection. Confirm that security defaults or Microsoft-managed Conditional Access Policies are enabled on all your tenants and obtain guidance on the phishing-resistant authentication methods available in Microsoft Entra ID, including passkeys. Use Microsoft Defender for Cloud Apps to discover and manage shadow IT in your Microsoft network. Adopt managed identities for Azure and workload identity federation, and strengthen access controls for non-human identities with Microsoft Entra Workload ID.

2. Extend Zero Trust access controls to all resources

It’s essential to have visibility, control, and governance over who and what has access to your environment, what they’re trying to do, and why. The goal is to enable flexible work while protecting against escalating cyberthreats. This requires extending Zero Trust access controls to every resource and entry point, including legacy on-premises applications and services, legacy devices and infrastructure, and any internet destinations. Consider how you can reduce effort and errors using automation, while also making it easier for security teams to share insights and collaborate.

Outlined below are key strategies for extending Zero Trust access controls to all resources.

  • Unify your access policy engines across all users, applications, endpoints, and networks to simplify your Zero Trust architecture. Converge access policies for identity security tools and network security tools to eliminate coverage gaps and enforce more robust access controls.
  • Extend modern access controls to all apps and internet resources: Use modern network security tools like Secure Access Service Edge to extend strong authentication, Conditional Access, and continuous access evaluation to legacy on-premises apps, shadow IT apps, and any internet destination. Retire your outdated VPN and configure granular per-app access policies to prevent lateral movement inside your network.
  • Enforce least privilege access: Automate your identity and access lifecycle to ensure that all users only have necessary access as they join your organization and change jobs, and that their access is revoked as soon as they leave. Use cloud human resources systems as a source of authority in join-move-leave workflows to enforce real-time access changes. Eliminate standing privileges and require just-in-time access for sensitive workloads and data. Regularly review access permissions to help prevent lateral movement in case of a user identity compromise.
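
The join-move-leave automation in the last bullet can be thought of as a pure function from HR state to entitlements. A minimal illustrative sketch (the role names and entitlements are hypothetical; a real deployment would use Microsoft Entra entitlement management and lifecycle workflows rather than a hard-coded table):

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical role-to-entitlement mapping; in practice this comes from
# your identity governance system, not a hard-coded dictionary.
ROLE_ENTITLEMENTS = {
    "finance-analyst": {"erp-read", "reporting-portal"},
    "finance-manager": {"erp-read", "erp-approve", "reporting-portal"},
    "engineer": {"source-control", "build-pipeline"},
}

@dataclass
class HrRecord:
    user: str
    role: Optional[str]  # None means the user has left the organization

def target_entitlements(record: HrRecord) -> set:
    """Leavers get nothing; everyone else gets exactly their role's set."""
    if record.role is None:
        return set()
    return set(ROLE_ENTITLEMENTS.get(record.role, set()))

def access_delta(current: set, record: HrRecord) -> dict:
    """Compute the grants and revocations that enforce least privilege."""
    target = target_entitlements(record)
    return {"grant": target - current, "revoke": current - target}

# A mover from engineer to finance-analyst loses build access immediately:
delta = access_delta({"source-control", "build-pipeline"},
                     HrRecord("avery", "finance-analyst"))
```

The key property is that access is computed from the authoritative HR record rather than accumulated over time, so movers never retain stale permissions and leavers converge to the empty set.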

To get started: Explore the Microsoft Entra Suite to secure user access and simplify Zero Trust deployments. Use entitlement management and lifecycle workflows to automate identity and access lifecycle processes. Use Microsoft Entra Private Access to replace legacy VPN with modern access controls, and use Microsoft Entra Internet Access to extend Conditional Access and continuous access evaluation to any resource, including shadow IT apps and internet destinations. Use Microsoft Entra Workload ID to secure access for non-human identities.

3. Use generative AI to tip the scales in favor of defenders

Generative AI is indispensable for staying ahead of cyberthreats in 2025. It helps defenders identify policy gaps, detect risks, and automate processes to strengthen security practices and defend against threats. A recent study found that within three months, organizations using Microsoft Security Copilot experienced a 30.13% reduction in average time to resolve security incidents.7 For identity teams, the impact is even more pronounced. IT admins using Copilot in the Microsoft Entra admin center spent 45.41% less time troubleshooting sign-ins, and increased accuracy by 46.88%.8

Outlined below are opportunities available to transform the daily work of identity professionals with generative AI:

  • Enhance risky user investigations: Investigate identity compromises faster with AI-powered recommendations for proactive mitigation and defense. Use natural language conversations to investigate risky users and to gain insights into elevated risk levels and risky sign-ins.
  • Troubleshoot sign-ins: Use natural language conversations to uncover root causes of sign-in failures, interruptions, or multifactor authentication prompts. Automate troubleshooting tasks and let AI discover actionable insights across user details, group details, sign-in logs, audit logs, and diagnostic logs.
  • Mitigate app risks: Use intuitive prompts to manage and remediate application risks as well as gain detailed insights into permissions, workload identities, and cyberthreats.
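
Behind the natural-language experience, sign-in troubleshooting draws on the same data exposed at the Microsoft Graph /auditLogs/signIns endpoint, which identity teams can also query directly. A hedged sketch of building such a query (endpoint and property names per the Graph v1.0 schema; token acquisition and the HTTP request itself are omitted):

```python
from urllib.parse import urlencode

GRAPH_SIGNINS = "https://graph.microsoft.com/v1.0/auditLogs/signIns"

def recent_signins_url(upn: str, top: int = 25) -> str:
    """Build a Graph query URL for a user's most recent sign-in events.
    (Property names follow the Graph v1.0 signIn resource; the actual
    GET request and bearer-token acquisition are left out here.)"""
    params = {
        "$filter": f"userPrincipalName eq '{upn}'",
        "$orderby": "createdDateTime desc",
        "$top": str(top),
    }
    return f"{GRAPH_SIGNINS}?{urlencode(params)}"

url = recent_signins_url("avery@contoso.com")
# GET with a token holding AuditLog.Read.All; inspect each event's
# status.errorCode and status.failureReason to see why sign-ins failed.
```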

At Microsoft Ignite 2024, we announced the preview of Security Copilot embedded directly into the Microsoft Entra admin center that included new skills to empower identity professionals and security analysts. We’re committed to enhancing Security Copilot to help identity and network security professionals collaborate effectively, respond more swiftly, and get ahead of emerging threats. We encourage you to participate in shaping these tools as we develop them.

To get started: Learn more about getting started with Microsoft Security Copilot.

Our commitment to supporting proactive security measures

By investing in proactive measures in 2025, you can significantly improve your security hygiene and operational resilience. To help you strengthen your defenses, we’re committed to innovating ahead of malicious actors, simplifying security to reduce the burden on security teams, and sharing everything we learn from protecting Microsoft and our customers.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.


1The passwordless future is here for your Microsoft account, Vasu Jakkal. September 15, 2021.

2Microsoft Digital Defense Report 2024.

3Secure Future Initiative: September 2024 Progress Report, Microsoft.

4How to break the token theft cyber-attack chain, Alex Weinert. June 20, 2024.

5Defeating Adversary-in-the-Middle phishing attacks, Alex Weinert. November 18, 2024.

6Automatic Conditional Access policies in Microsoft Entra streamline identity protection, Alex Weinert. November 3, 2023.

7Generative AI and Security Operations Center Productivity: Evidence from Live Operations, Microsoft. November 2024.

8Randomized Controlled Trials for Security Copilot for IT Administrators, Microsoft. November 2024.

The post 3 priorities for adopting proactive identity and access security in 2025 appeared first on Microsoft Security Blog.

]]>
Fast-track generative AI security with Microsoft Purview http://approjects.co.za/?big=en-us/security/blog/2025/01/27/fast-track-generative-ai-security-with-microsoft-purview/ Mon, 27 Jan 2025 17:00:00 +0000 Read how Microsoft Purview can secure and govern generative AI quickly, with minimal user impact, deployment resources, and change management.

The post Fast-track generative AI security with Microsoft Purview appeared first on Microsoft Security Blog.

]]>
As a data security global black belt, I help organizations secure AI solutions. They are concerned about data oversharing, data leaks, compliance, and other potential risks. Microsoft Purview is Microsoft’s solution for securing and governing data in generative AI.

I’m often asked how long it takes to deploy Microsoft Purview. The answer depends on the specifics of the organization and what they want to achieve. Microsoft Purview should enable a comprehensive data governance program, but it can also provide risk mitigation for generative AI in the short term while that program is underway.

Microsoft Purview

Secure and govern your entire data estate.


Organizations need AI solutions to add value for their customers and to stay competitive. They can’t wait for years to secure and govern these systems.

For the organizations deploying generative AI, “how long does it take to deploy Microsoft Purview?” isn’t the right question.

The risk mitigation Microsoft Purview provides for AI can begin on day one. This includes Microsoft AI, like Microsoft 365 Copilot, AI that an organization builds in-house, and AI from third parties like Google Gemini or ChatGPT.

This post will discuss ways we can secure and govern data used or generated by AI quickly, with minimal user impact, change management, and resources required.

These Microsoft Purview solutions are:

  • Microsoft Purview Data Security Posture Management for AI
  • Microsoft Purview Information Protection
  • Microsoft Purview Data Loss Prevention
  • Microsoft Purview Communication Compliance
  • Microsoft Purview Insider Risk Management
  • Microsoft Purview Data Lifecycle Management
  • Microsoft Purview Audit and Microsoft Purview eDiscovery
  • Microsoft Purview Compliance Manager

Here are short-term steps you can take while the comprehensive data governance program is underway.

Microsoft Purview Data Security Posture Management for AI

Microsoft Purview Data Security Posture Management for AI (DSPM for AI) provides visibility into data security risks. It reports on:

  • Users’ interactions with AI.
  • Sensitive information in the prompts users share with the AI.
  • Whether the sensitive information users share is labeled and thus is protected by durable security policy controls.
  • Whether and how user interactions may violate company policy, including codes of conduct and jailbreak attempts, where users manipulate the system to circumvent protections.
  • The risk level of users interacting with the system, such as inadvertent or malicious activities they may be involved in that put the organization at risk.

DSPM for AI reports on this for each AI application and can drill down from the reports to the individual user activities. DSPM for AI collects and surfaces insights from the other Microsoft Purview solutions around generative AI risks in a single screen.

DSPM for AI reasons over custom sensitive information types, sensitivity labels, and information protection rules; if these are not yet configured, more than 300 out-of-the-box sensitive information types are available from day one.

DSPM for AI will use these to report on risk for the organization without additional configuration. The organization’s administrators can configure policy to mitigate these risks directly from the DSPM for AI tool.


Figure 1. DSPM for AI shows interactions with Microsoft 365 Copilot, enterprise generative AI from other providers, and AI developed in-house.


Figure 2. DSPM for AI Reports on generative AI user interactions with sensitive data.

A big concern organizations have in widely deploying generative AI is that it will return results containing sensitive information that the user should not have access to. SharePoint sites created over the years may be unlabeled yet accessible to the entire organization through the AI. The “security by obscurity” that may have prevented the sensitive information from being inappropriately shared is now negated by an AI that reasons over and returns the data.

Data assessments, part of DSPM for AI and currently in preview, identify potential oversharing risks and allow the administrator to apply a sensitivity label to the SharePoint sites or the sensitive data, or to initiate a Microsoft Entra ID user access review to manage group memberships.

The administrator can engage the business stakeholder who has knowledge of the risk posed by the data and invite them to mitigate the risk or apply the policy at scale from the Microsoft Purview administration portal.


Figure 3. Data assessment—visualize risk, review access, and deploy policy.

Microsoft Purview Information Protection

The document access controls of Microsoft Purview Information Protection, including sensitivity labels, are enforced when the data is reasoned over by AI. Users are shown, in context, that they are working with sensitive information. This awareness empowers them to protect the organization.

The sensitivity labels that enforce scoped encryption, watermarking, and other protections travel with the document as the user interacts with the AI. When the AI creates new content based on the document, the new content inherits the most restrictive label and policy.
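
The “most restrictive label” inheritance rule can be sketched as a simple priority comparison. The label names and priority values below are illustrative only; in Microsoft Purview, the ordering of sensitivity labels defines their priority:

```python
# Illustrative label priorities; higher number = more restrictive.
# In Microsoft Purview, label order in the admin portal defines priority.
LABEL_PRIORITY = {
    "Public": 0,
    "General": 1,
    "Confidential": 2,
    "Highly Confidential": 3,
}

def inherited_label(source_labels):
    """AI-generated content inherits the most restrictive source label."""
    if not source_labels:
        return None
    return max(source_labels, key=LABEL_PRIORITY.__getitem__)

# Content derived from a mix of labeled documents gets the strictest label:
label = inherited_label(["General", "Highly Confidential", "Confidential"])
```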

Microsoft Purview can automatically apply sensitivity labels to AI interactions based on the organization’s existing policy for email, desktop applications, and Microsoft Teams, or new policy can be deployed for the AI.

These can be based on out-of-the-box sensitive information types for a quick start.

Microsoft Purview Data Loss Prevention

The Microsoft Purview Data Loss Prevention policies that the organization currently uses for email, desktop applications, and Teams can be extended to the AI, or new policy for the AI can be created. Cutting and pasting sensitive information, or transferring a labeled document into the AI, can be prevented outright or allowed only with an auditable justification from the user.

A rule can be configured to prevent all documents bearing a specific label from being reasoned over by the AI. Out-of-the-box sensitive information types can be used for a quick start.
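
Conceptually, such a rule evaluates each prompt against sensitive information type patterns before the AI reasons over it. A toy sketch of that evaluation (these regexes are illustrative stand-ins, not Purview’s actual classifiers, which combine patterns with validation logic and confidence levels):

```python
import re

# Toy detectors; Purview ships 300+ curated sensitive information types.
# These regexes are illustrative only and will over- and under-match.
DETECTORS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def evaluate_prompt(prompt: str, blocked_types=("credit_card", "us_ssn")):
    """Return (allow, matched_types): block the prompt when any blocked
    sensitive information type pattern matches its text."""
    matched = [name for name in blocked_types
               if DETECTORS[name].search(prompt)]
    return (not matched, matched)

allow, matched = evaluate_prompt("my card is 4111 1111 1111 1111")
```

In the real service this decision point also supports the “allow with auditable justification” path described above, rather than a hard block.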

Microsoft Purview Communication Compliance

Microsoft Purview Communication Compliance provides the ability to detect regulatory compliance violations (for example, of SEC or FINRA rules) and business conduct violations such as sharing sensitive or confidential information, harassing or threatening language, and sharing of adult content.

Out-of-the-box policies can be used to monitor user prompts or AI-generated content. It provides policy enforcement in near real time, as well as audit logs and reporting.

Microsoft Purview Insider Risk Management

Microsoft Purview Insider Risk Management correlates signals to identify potentially malicious or accidental behaviors from legitimate users. Pre-configured generative AI-specific risk detections and policy templates are now available in preview.

As the Insider Risk Management solution algorithms determine a user to be engaging in risky behavior, the data loss prevention (DLP) policies for that user can be made stricter using a feature called Adaptive Protection. It can be configured with out-of-the-box policies. This continuous monitoring and policy modulation mitigates risk while reducing administrator workload.

AI analytics can be activated from the Microsoft Purview portal to provide insights even before the Insider Risk Management solution is deployed to users. This quickly surfaces AI risks with minimal administrative workload.

Microsoft Purview Data Lifecycle Management

Microsoft Purview can enforce AI Data Lifecycle Management, with retention of AI prompts, prompt returns, and the documents AI creates for a specified time period. This can be done globally for every interaction with an AI solution. It can be done with out-of-the-box or custom policies. This will keep these interactions available for future investigations, for regulatory compliance, or to tune policies and inform the governance program.

A policy for deletion of AI interactions can be enforced so information is not over-retained.

Microsoft Purview Audit and Microsoft Purview eDiscovery

The organization will need to support internal investigations around the use of AI; Microsoft Purview Audit logs and retains these interactions. Organizations also need to support their legal teams should they have to produce AI interactions in litigation.

Microsoft Purview eDiscovery can put a user’s interactions with the AI, as well as their other Microsoft 365 documents and communications, on hold so that their availability to support investigations is maintained. It allows them to be searched based on metadata (enhancing relevancy), annotated, and produced.

Microsoft Purview Compliance Manager

Microsoft Purview Compliance Manager has pre-built assessments for AI regulations including:

  • EU Artificial Intelligence Act.
  • ISO/IEC 23894:2023.
  • ISO/IEC 42001:2023.
  • NIST AI Risk Management Framework (RMF) 1.0.

These assessments are available to benchmark compliance over time, report on control status, and maintain and produce evidence for both Microsoft and the organization’s activities that support the regulatory compliance program.

Microsoft Purview is an AI enabler

Without the security, governance, and compliance bases covered, an AI program puts the organization at risk. An AI program can be blocked before it deploys if the team can’t demonstrate how it is mitigating these risks.

The actions suggested here can all be taken quickly, and with limited effort, to set up a generative AI deployment for success.

Learn more

Learn more about Microsoft Purview.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and Twitter (@MSFTSecurity) for the latest news and updates on cybersecurity.

The post Fast-track generative AI security with Microsoft Purview appeared first on Microsoft Security Blog.

]]>
Why security teams rely on Microsoft Defender Experts for XDR for managed detection and response http://approjects.co.za/?big=en-us/security/blog/2025/01/06/why-security-teams-rely-on-microsoft-defender-experts-for-xdr-for-managed-detection-and-response/ Mon, 06 Jan 2025 17:00:00 +0000 Microsoft Defender Experts for XDR is a mature and proven service that triages, investigates, and responds to incidents and hunts for threats on a customer’s behalf around the clock. Learn more about why organizations across major industries rely on it.

The post Why security teams rely on Microsoft Defender Experts for XDR for managed detection and response appeared first on Microsoft Security Blog.

]]>
The expanding attack surface is creating more opportunities for exploitation and adding to the pressure on security leaders and teams. Increasingly, organizations are investing in managed detection and response (MDR) services to bolster their security operations center (SOC) and meet the challenge. Demand is growing rapidly: according to Frost & Sullivan, the market for MDR is expanding at a rate of 35.2% annually.

While there are new vendors launching MDR services regularly, many security teams are turning to Microsoft Defender Experts for XDR, a recognized leader, to deliver comprehensive coverage.1 Employed worldwide by organizations across industries, Microsoft’s team of dedicated experts proactively hunts for cyberthreats and triages, investigates, and responds to incidents on a customer’s behalf around the clock across their most critical assets. Our proven service brings together in-house security professionals and industry-leading protection with Microsoft Defender XDR to help security teams rapidly stop cyberthreats and keep their environments secure.2 

Frost & Sullivan names Microsoft Defender Experts for XDR a leader in the Frost Radar™ Managed Detection and Response for 2024.1 


Microsoft Defender Experts for XDR

Give your security operations center team coverage with end-to-end protection and expertise.

Reduce the staffing burden, improve security coverage, and focus on other priorities

Microsoft Defender Experts for XDR greatly improves operational efficacy while elevating an organization’s security posture to a new level. The team of experts will monitor the environment, find and halt cyberthreats, and help contain incidents faster with human-led response and remediation. With Defender Experts for XDR, organizations will expand their threat protection capabilities, reduce the number of incidents over time, and have more resources to focus on other priorities.

More experts on your side

Scaling in-house security teams remains challenging. Security experts are not only scarce but expensive. The persistent gap in open security positions has widened to 25% since 2022, meaning one in four in-house security analyst positions will remain unfilled.3 In the Forrester Consulting New Technology Project Total Economic Impact study, without Defender Experts for XDR, the in-house team size for the composite organization would need to increase by up to 30% in a mid-impact scenario, or 40% in a high-impact scenario, in year one to provide the same level of threat detection service.4 Given the lack of available security talent, increasing an in-house team size by 40% poses significant security concerns for CISOs. Existing security team members won’t be able to perform all the tasks required, and many will be overworked, which may lead to burnout.

With more than 34,000 full-time equivalent security engineers, Microsoft is one of the largest security companies in the world. Microsoft Defender Experts for XDR reinforces your security team with Microsoft security professionals to help reduce talent gap concerns. In addition to the team of experts, customers have additional Microsoft security resources to help with onboarding, recommendations, and strategic insights.

“Microsoft has the assets and people I needed. All the technologies, Microsoft Azure, and a full software stack end-to-end, all combined together with the fabric of security. Microsoft [Defender Experts for XDR] has the people and the ability to hire and train those people with the most upmost skill set to deal with the issues we face.”

—Head of Cybersecurity Response Architecture, financial services industry

Accelerate and expand protection against today’s cyberthreats

Microsoft Defender Experts for XDR deploys quickly. That’s welcome news to organizations that want to mature their security program but can’t wait for new staffing and capabilities to be developed in-house. Customers can quickly leverage the deep expertise of the Microsoft Defender Experts for XDR team to tackle the increasing number of sophisticated threats.

What is phishing?

Learn more

CISOs and security teams know that phishing attacks continue to rise because cybercriminals are finding success. Email remains the most common method for phishing attacks, with 91% of all cyberattacks beginning with a phishing email. Phishing is the primary method for delivering ransomware, accounting for 45% of all ransomware attacks. Financial institutions are most targeted at 27.7% followed by nearly all other industries.5

According to internal Microsoft Defender Experts for XDR statistics, roughly 40% of halted threats are phishing.

Microsoft Defender Experts for XDR is a managed extended detection and response (MXDR) service. MXDR is an evolution of traditional MDR services, which primarily focus on endpoints. Our MXDR service provides broader protection across endpoints, email and productivity tools, identities, and cloud apps, ensuring the detection and disruption of many cyberthreats, such as phishing, that would not be covered by endpoint-only managed services. That expanded and consolidated coverage enables Microsoft Defender Experts for XDR to find even the most emergent threats. For example, our in-house team identified and disrupted a significant Octo Tempest operation that was working across previously siloed domains.

Microsoft Defender Experts for XDR reduces the likelihood of breaches by roughly 20%, which is worth $261,000 to $522,000 over three years.4

In addition to detecting, triaging, and responding to cyberthreats, Microsoft Defender Experts for XDR publishes insights to keep organizations secure. That includes recent blogs on file hosting services abuse and phishing abuse of remote monitoring and management tools. The MXDR service also vetted roughly 45 indicators related to adversary-in-the-middle, password spray, and multifactor authentication fatigue, and added them to Spectre to help keep organizations secure.

From September 2024 through November 2024, Microsoft Security published multiple cyberthreat articles covering real-world exploitation topics such as Roadtools, AzureHound, Fake Palo Alto GlobalProtect, AsyncRAT via ScreenConnect, the Specula C2 Framework, a SectopRAT campaign, and Selenium Grid for Cryptomining.

“The Microsoft MXDR service, Microsoft Defender Experts for XDR, is helping our SOC team around the clock and taking our security posture to the next level. On our second day of using the service, there was an alert we had previously dismissed, but Microsoft continued the investigation and identified a machine in our environment that was open to the internet. It was created by a threat actor using a remote desktop protocol (RDP). Microsoft Defender Experts for XDR’s MXDR investigation and response to remediate the issue was immediately valuable to us.”

—Director of Security Operations, financial services industry

Halt cyberthreats before they do damage

In 2024, the mean time for the average organization to identify a breach was 194 days, and the mean time to contain it was 64 days.6 Organizations must proactively look for cyberattackers across unified cross-domain telemetry rather than relying solely on disparate product alerts. Proactive threat hunting is no longer a nice-to-have in an organization’s security practice. It’s a must-have to detect cyberthreats faster, before they can do significant harm.

When every minute counts, Microsoft Defender Experts for XDR can help speed up the detection of an intrusion with proactive threat hunting informed by Microsoft’s threat intelligence, which tracks more than 1,500 unique cyberthreat groups and correlates insights from 78 trillion security signals per day.7

Microsoft Defender Experts for Hunting proactively looks for threats around the clock across endpoints, email, identity, and cloud apps using Microsoft Defender and other signals. Threat hunting leverages advanced AI and human expertise to probe deeper and rapidly correlate and expose cyberthreats across an organization’s security stack. With visibility across diverse, cross-domain telemetry and threat intelligence, Microsoft Defender Experts for Hunting extends in-house threat hunting capabilities to provide an additional layer of threat detection to improve a SOC’s overall threat response and security efficacy.

In a recent survey, 63% of organizations saw a measurable improvement in their security posture with threat hunting. 49% saw a reduction in network and endpoint attacks along with more accurate threat detection and a reduction of false positives.8

Microsoft Defender Experts for Hunting enables organizations to detect and mitigate cyberthreats such as advanced persistent threats or zero-day vulnerabilities. By actively seeking out hidden risks and reducing dwell time, threat hunting minimizes potential damage, enhances incident response, and strengthens overall security posture.

Microsoft Defender Experts for XDR, which includes Microsoft Defender Experts for Hunting, allows customers to stay ahead of sophisticated threat actors, uncover gaps in defenses, and adapt to an ever-evolving cyberthreat landscape.

“Managed threat hunting services detect and address security threats before they become major incidents, reducing potential damage. By implementing this (Defender Experts for Hunting), we enhance our cybersecurity posture by having experts who continuously look for hidden threats, ensuring the safety of our data, reputation, and customer trust.”

—CISO, technology industry

Spend less to get more

Microsoft Defender Experts for XDR helps CISOs do more with their security budgets. According to a 2024 Forrester Total Economic Impact™ study, Microsoft Defender Experts for XDR generated a projected return on investment (ROI) of up to 254% with a projected net present value of up to $6.1 million for the profiled composite company.4

Microsoft Defender Experts for XDR includes trusted advisors who provide insights on operationalizing Microsoft Defender XDR for optimal security efficacy. This helps reduce the burden on in-house security and IT teams so they can focus on other projects.

Beyond lowering security operations costs, the Forrester study noted Microsoft Defender Experts for XDR efficiency gains for surveyed customers, including a 49% decrease in security-related IT help desk tickets. Other productivity gains included freeing up 42% of available full time employee hours and lowering general IT security-related project hours by 20%.4

Learn how Microsoft Defender Experts for XDR can improve organizational security

Microsoft Defender Experts for XDR is Microsoft’s MXDR service. It delivers round-the-clock threat detection, investigation, and response capabilities, along with proactive threat hunting. Designed to help close the security talent gap and enhance organizational security postures, the MXDR service combines Microsoft’s advanced Microsoft Defender XDR capabilities with dedicated security experts to tackle cyberthreats like phishing, ransomware, and zero-day vulnerabilities. Offering rapid deployment, significant ROI (254%, as per Forrester), and operational efficiencies, Microsoft Defender Experts for XDR reduces incident and alerts volume, improves the security posture, and frees up in-house resources. Organizations worldwide benefit from these scalable solutions, leveraging Microsoft’s threat intelligence and security expertise to stay ahead of evolving cyberthreats.

To learn more, please visit Microsoft Defender Experts for XDR or contact your Microsoft security representative.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.


1Frost & Sullivan names Microsoft a Leader in the Frost Radar™: Managed Detection and Response, 2024, Srikanth Shoroff. March 25, 2024.

2Microsoft a Leader in the Forrester Wave for XDR, Microsoft Security Blog. June 3, 2024.

3ISC2 Cybersecurity Workforce Report, 2024.

4Forrester Consulting study commissioned by Microsoft, 2024, New Technology: The Projected Total Economic Impact™ of Microsoft Defender Experts For XDR.

52024 Phishing Facts and Statistics, Identitytheft.org.

6Time to identify and contain data breaches global 2024, Statista.

7Microsoft Digital Defense Report, 2024.

8SANS 2024 Threat Hunting Survey, March 19, 2024.

The post Why security teams rely on Microsoft Defender Experts for XDR for managed detection and response appeared first on Microsoft Security Blog.

]]>