Security Archives | Microsoft AI Blogs

New whitepaper outlines the taxonomy of failure modes in AI agents
http://approjects.co.za/?big=en-us/security/blog/2025/04/24/new-whitepaper-outlines-the-taxonomy-of-failure-modes-in-ai-agents/ (Thu, 24 Apr 2025)

Read the new whitepaper from the Microsoft AI Red Team to better understand the taxonomy of failure modes in agentic AI.

We are releasing a taxonomy of failure modes in AI agents to help security professionals and machine learning engineers think through how AI systems can fail and design them with safety and security in mind.

The taxonomy continues the Microsoft AI Red Team’s work to lead the systematization of failure modes in AI. In 2019, we published one of the earliest industry efforts enumerating the failure modes of traditional AI systems. In 2020, we partnered with MITRE and 11 other organizations to codify security failures in AI systems as the Adversarial ML Threat Matrix, which has since evolved into MITRE ATLAS™. This effort is another step in helping the industry think through the safety and security failures in the fast-moving and highly impactful agentic AI space.

Taxonomy of Failure Modes in Agentic AI Systems

Microsoft’s new whitepaper explains the taxonomy of failure modes in AI agents, aimed at enhancing safety and security in AI systems.


To build out this taxonomy and ensure that it was grounded in concrete and realistic failures and risks, the Microsoft AI Red Team took a three-pronged approach:

  • First, we catalogued the failures in agentic systems based on Microsoft’s internal red teaming of our own agent-based AI systems.
  • Next, we vetted and refined the taxonomy with stakeholders across the company: Microsoft Research, Microsoft AI, Azure Research, the Microsoft Security Response Center, the Office of Responsible AI, the Office of the Chief Technology Officer, other security research teams, and several organizations within Microsoft that are building agents.
  • Finally, to make this useful to those outside of Microsoft, we conducted systematic interviews with external practitioners who are developing agentic AI systems and frameworks to polish the taxonomy further.

To help frame this taxonomy in a real-world application for readers, we also provide a case study of the taxonomy in action. We take memory, a common agentic AI feature, and walk through how a cyberattacker could corrupt an agent’s memory and use it as a pivot point to exfiltrate data.

Figure 1. Failure modes in agentic AI systems, organized by safety and security and by whether the harm is novel or existing.

Core concepts in the taxonomy

While identifying and categorizing the different failure modes, we broke them down across two pillars: security and safety.

  • Security failures are those that result in core security impacts, namely a loss of confidentiality, availability, or integrity of the agentic AI system; for example, a failure that allows a threat actor to alter the intent of the system.
  • Safety failure modes are those that affect the responsible implementation of AI, often resulting in harm to the users or society at large; for example, a failure that causes the system to provide differing quality of service to different users without explicit instructions to do so.

We then mapped the failures along two axes—novel and existing.

  1. Novel failure modes are unique to agentic AI and have not been observed in non-agentic generative AI systems, such as failures that occur in the communication flow between agents within a multiagent system.
  2. Existing failure modes have been observed in other AI systems, such as bias or hallucinations, but gain in importance in agentic AI systems due to their impact or likelihood.
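To make the two-pillar, two-axis structure concrete, the taxonomy can be modeled as a small data structure. The sketch below is illustrative only: the example failure modes and their quadrant placements are assumptions drawn from the examples in this post, not the whitepaper’s full enumeration.

```python
from dataclasses import dataclass
from enum import Enum

class Pillar(Enum):
    SECURITY = "security"   # loss of confidentiality, availability, or integrity
    SAFETY = "safety"       # harm to users or society at large

class Novelty(Enum):
    NOVEL = "novel"         # unique to agentic AI
    EXISTING = "existing"   # seen in earlier AI systems, amplified by agents

@dataclass(frozen=True)
class FailureMode:
    name: str
    pillar: Pillar
    novelty: Novelty

# Illustrative placements based on examples mentioned in this post.
MODES = [
    FailureMode("agent memory poisoning", Pillar.SECURITY, Novelty.NOVEL),
    FailureMode("inter-agent communication tampering", Pillar.SECURITY, Novelty.NOVEL),
    FailureMode("bias", Pillar.SAFETY, Novelty.EXISTING),
    FailureMode("hallucination", Pillar.SAFETY, Novelty.EXISTING),
]

def quadrant(pillar: Pillar, novelty: Novelty) -> list[str]:
    """List the failure modes that fall in one quadrant of the taxonomy map."""
    return [m.name for m in MODES if m.pillar == pillar and m.novelty == novelty]
```

Organizing failure modes this way makes it easy to ask questions such as “which security harms are genuinely new to agents?” during threat modeling.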

Beyond identifying the failure modes, we also identified the effects these failures could have on the systems in which they appear and on those systems’ users. Additionally, we identified key practices and controls that those building agentic AI systems should consider to mitigate the risks posed by these failure modes, including architectural approaches, technical controls, and user design approaches that build upon Microsoft’s experience in securing software as well as generative AI systems.

The taxonomy provides multiple insights for engineers and security professionals. For instance, we found that memory poisoning is particularly insidious in AI agents because the absence of robust semantic analysis and contextual validation mechanisms allows malicious instructions to be stored, recalled, and executed. The taxonomy provides multiple strategies to combat this, such as limiting the agent’s ability to autonomously store memories by requiring external authentication or validation for all memory updates, limiting which components of the system have access to the memory, and controlling the structure and format of items stored in memory.
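The three mitigations just listed — validated writes, restricted readers, and a constrained entry format — can be sketched as a guarded memory store. This is a minimal illustration of the pattern, not Microsoft’s implementation; the validator and the “suspicious phrase” list are invented for the example.

```python
import re

class GuardedAgentMemory:
    """Toy memory store applying the mitigations described above: every write
    must pass an external validator, only allow-listed components may read,
    and entries must match a fixed structure."""

    # Constrain the stored format: short plain text, no markup or special chars.
    ENTRY_PATTERN = re.compile(r"^[\w .,:-]{1,500}$")

    def __init__(self, validator, allowed_readers):
        self._validator = validator              # e.g. a policy or approval service
        self._allowed_readers = set(allowed_readers)
        self._entries = []

    def store(self, text: str) -> bool:
        # Reject entries that break the expected structure.
        if not self.ENTRY_PATTERN.match(text):
            return False
        # Require external validation before persisting a memory.
        if not self._validator(text):
            return False
        self._entries.append(text)
        return True

    def recall(self, component: str) -> list:
        # Limit which components of the system can access memory.
        if component not in self._allowed_readers:
            raise PermissionError(f"{component} may not read agent memory")
        return list(self._entries)

# Invented toy validator: block entries that look like instructions to the agent.
def no_instructions(text: str) -> bool:
    return not any(k in text.lower() for k in ("ignore previous", "exfiltrate", "send to"))
```

A benign preference is stored, while an injected instruction is rejected at write time and an unauthorized tool is refused at read time — the “pivot point” in the case study never forms.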

How to use this taxonomy

  1. For engineers building agentic systems:
    • We recommend using this taxonomy as part of designing the agent, augmenting the existing Security Development Lifecycle and threat modeling practices. The guide helps walk through the different harms and their potential impact.
    • For each harm category, we provide suggested mitigation strategies that are technology agnostic to kickstart the process.
  2. For security and safety professionals:
    • This is a guide on how to probe AI systems for failures before the system launches. It can be used to generate concrete attack kill chains that emulate real-world cyberattackers.
    • This taxonomy can also be used to help inform defensive strategies for your agentic AI systems, including providing inspiration for detection and response opportunities.
  3. For enterprise governance and risk professionals, this guide can help provide an overview of not just the novel ways these systems can fail but also how these systems inherit the traditional and existing failure modes of AI systems.

Learn more

Like all taxonomies, we consider this a first iteration and hope to continually update it, as we see the agent technology and cyberthreat landscape change. If you would like to contribute, please reach out to airt-agentsafety@microsoft.com.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.


The taxonomy was led by Pete Bryan; the case study on poisoning memory was led by Giorgio Severi. Others who contributed to this work: Joris de Gruyter, Daniel Jones, Blake Bullwinkel, Amanda Minnich, Shiven Chawla, Gary Lopez, Martin Pouliot, Whitney Maxwell, Katherine Pratt, Saphir Qi, Nina Chikanov, Roman Lutz, Raja Sekhar Rao Dheekonda, Bolor-Erdene Jagdagdorj, Eugenia Kim, Justin Song, Keegan Hines, Richard Lundeen, Sam Vaughan, Victoria Westerhoff, Yonatan Zunger, Chang Kawaguchi, Mark Russinovich, and Ram Shankar Siva Kumar.

3 new ways AI agents can help you do even more
https://news.microsoft.com/source/features/ai/3-new-ways-ai-agents-can-help-you-do-even-more/ (Mon, 14 Apr 2025)
By Samantha Kubota



The word “agent” might remind us of a human who plans travel or maybe a well-dressed British spy. But in the rapidly evolving world of AI, the term has a whole new meaning that is reshaping our interaction with technology and automation.  

As the technology continues to advance, new Microsoft AI agents unveiled over the past few weeks can help people every day with things like research, cybersecurity and more.  

Imagine having a personal assistant that doesn’t just respond to commands but anticipates your needs, does complex tasks and keeps learning from every interaction — meaning it actually improves over time.  

AI agents analyze their environment, make decisions and take actions, tackling tasks with you or on your behalf based on your goals and guardrails. That means that instead of doing repetitive tasks, you can save time and focus on more creative and strategic work. 

Two new reasoning agents announced in late March for Microsoft 365 Copilot can help you be more productive in the office. Named Researcher and Analyst, they can securely analyze your work data — emails, meetings, files, chats and more — as well as the web to deliver highly skilled expertise on demand.

Researcher helps you tackle complex, multi-step research at work. It can build a detailed marketing strategy based on your work data and broader info from the web, identify opportunities for a new product based on emerging trends and internal data, or create a comprehensive quarterly report for a client review. It can also integrate data from external sources such as Salesforce, ServiceNow and Confluence directly into Microsoft 365 Copilot. 

Researcher combines OpenAI’s deep research model with Microsoft 365 Copilot’s advanced orchestration and deep search capabilities. 

Analyst, built on OpenAI’s o3-mini reasoning model, thinks like a virtual data scientist. It can take raw data scattered across multiple spreadsheets to do things like forecast how much demand there will be for a new product or build a visualization of customer purchasing patterns.  

Other new agents can help organizations defend against cyberthreats, handling certain security tasks to help human teams be more efficient.  

These agents, introduced March 24, are designed to autonomously assist with critical areas such as phishing, data security and identity management.  

For example, a new phishing triage agent in Microsoft Security Copilot can handle routine phishing alerts and cyberattacks, freeing up human cybersecurity teams to focus on more complex cyberthreats and proactive security measures. 

And the new Alert Triage Agents in Microsoft Purview can triage data loss prevention and insider risk alerts, prioritize critical incidents and continuously improve accuracy based on administrator feedback. 

Agents are giving developers new options as well.  

Two new ones are accessible in Azure AI Foundry — a platform where developers and organizations build, deploy and manage AI apps, providing the infrastructure developers need to create intelligent agents on a large scale.  

Microsoft Fabric data agents allow developers using Azure AI Agent Service in Azure AI Foundry to connect customized, conversational agents created in Microsoft Fabric. These data agents can reason over and unlock insights from various sources to make better data-driven decisions. 

For example, NTT DATA, a Japanese IT and consulting company, is using data agents in Microsoft Fabric to have conversations with HR and back-office operations data to better understand what is happening in the organization. 

And the new AI Red Teaming Agent, now in public preview, systematically probes AI models to uncover safety risks. It generates comprehensive reports and tracks improvements over time, creating an AI safety-testing ecosystem that evolves alongside your system.  

Learn more about the latest in agents at Microsoft Build 2025 — registration is now open. 

Image was created using Microsoft Designer, an AI-powered graphic design application.

Transforming public sector security operations in the AI era
http://approjects.co.za/?big=en-us/security/blog/2025/04/01/transforming-public-sector-security-operations-in-the-ai-era/ (Tue, 01 Apr 2025)

Read how Microsoft’s unified security operations platform can use generative AI to transform cybersecurity for the public sector.

The cyberthreat landscape is evolving at an unprecedented pace, becoming increasingly dangerous and complex. Nation-state threat actors and cybercriminals are employing advanced tactics and generative AI to execute highly sophisticated attacks. This situation is further compounded by outdated technology and systems, a shortage of cybersecurity talent, and antiquated processes that cannot keep up with the scale, complexity, and ever-evolving nature of these cyberattacks. With 62% of all cyberattacks targeting public sector organizations, it is crucial for these organizations to leverage state-of-the-art technology, powered by generative AI, to transform their cyber defense and stay ahead of these evolving threats.1

Microsoft’s unified security operations for public sector

Discover how Microsoft helps public sector organizations modernize security operations to enhance cyber defense and streamline processes.

Embracing modern security technology, processes, and continuous skill development is vital for protecting public sector organizations. By leveraging innovations powered by generative AI, unparalleled threat intelligence, and best practices, public sectors can transform their security operations to effectively defend against emerging cyberthreats.

AI-powered security operations: Microsoft delivers innovations to effectively protect against today’s complex threat landscape. The AI-powered unified security operations platform offers an enhanced and streamlined approach to security operations by integrating security information and event management (SIEM); security orchestration, automation, and response (SOAR); extended detection and response (XDR); posture and exposure management; cloud security; threat intelligence; and AI into a single, cohesive experience, eliminating silos and providing end-to-end security operations (SecOps). The unified platform boosts analyst efficiency, reduces context switching, and delivers quicker time to value with less integration work.

Microsoft is committed to helping public sector customers accelerate threat detection and response through improved security posture across organizations with richer insights, multi-tenant management, early warnings, and increased efficiency through automation and generative AI. Through automatic attack disruption, Microsoft Defender XDR utilizes robust threat intelligence, advanced AI, and machine learning to detect and contain sophisticated cyberattacks in real time, significantly reducing their impact. This high-fidelity detection and protection capability disrupts more than 40,000 incidents each month, such as identity threats and human-operated cyberattacks, while maintaining a false positive rate below 1%.

“Speed is an important factor against adversaries, and gaining situational awareness across a complex landscape of threats is therefore key.”

—Customer in the healthcare industry

People and process modernization: Public-private partnerships play a vital role in fostering the exchange of best practices and developing standardized processes that drive efficiency in incident response and threat intelligence sharing. For example, adapting the threat triage process to leverage generative AI agents can enable teams to scale significantly, with agents autonomously analyzing and triaging vast volumes of alerts in real time, prioritizing critical cyberthreats, and recommending specific remediation steps based on historical patterns. These collaborations also empower organizations to build teams equipped with cutting-edge skills and a comprehensive understanding of generative AI capabilities, helping them stay ahead of emerging cyberthreats.
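The triage workflow described above — analyze alerts, prioritize the critical ones, and recommend remediation from historical patterns — can be outlined as a simple pipeline. This is an illustrative sketch only: the scoring formula and the remediation playbook are invented for the example, and in a real system the classification and recommendation steps would be driven by a generative model rather than a static map.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    source: str
    category: str           # e.g. "phishing", "identity", "data-loss"
    confidence: float       # detector confidence, 0..1
    asset_criticality: int  # 1 (low) .. 5 (crown jewels)

# Invented playbook standing in for remediation steps a model would
# derive from historical incident patterns.
PLAYBOOK = {
    "phishing": "quarantine message; reset credentials of recipients who clicked",
    "identity": "revoke sessions; require re-authentication with MFA",
    "data-loss": "block transfer; notify data owner and review DLP policy",
}

def triage(alerts):
    """Rank alerts by a simple risk score and attach a recommended next step."""
    def score(a):
        return a.confidence * a.asset_criticality
    ranked = sorted(alerts, key=score, reverse=True)
    return [(a, score(a), PLAYBOOK.get(a.category, "escalate to analyst")) for a in ranked]
```

Even this toy version shows the shape of the gain: analysts start from a ranked queue with a suggested first action, rather than a flat stream of raw alerts.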

Collective cyber defense and threat intelligence: Using Microsoft’s global threat intelligence insights, public sector organizations can collaborate with each other and across other sectors to share deeper cyberthreat insights efficiently. This partnership enables public sector organizations to exchange threat intelligence in a standardized manner within a region or country.

“Collective defense collaborations are driven by mutual interests with industry peers and cybersecurity alliances on improving security postures and responding more effectively to emerging threats.”

—Customer in the transport industry

The power of generative AI in cyber operations

Generative AI brings several transformative benefits to cybersecurity, making it a cornerstone for public sector security operations center (SOC) modernization.

Enhanced threat detection and response: Generative AI has the potential to sift through data from firewalls, endpoints, and cloud workloads, surfacing actionable cyberthreats that might go unnoticed in manual reviews. Unlike traditional rule-based detection methods, generative AI can identify attack patterns, adapt to emerging cyberthreats, and prioritize incidents based on risk severity, helping security teams focus on the most critical issues. Generative AI can go beyond simply surfacing cyberthreats; it can contextualize attack signals, predict potential breaches, and recommend guided responses for remediation strategies, reducing the burden on security analysts. Microsoft Security Copilot already covers a range of use cases and is expanding rapidly to seize the full potential of generative AI. By providing guided incident investigation and response, Security Copilot helps SOC teams detect and respond to cyberthreats more effectively. It can help teams learn about malicious actors and campaigns, provide rapid summaries, and even contact the user to check for suspicious behavior. Adoption is associated with a 30% reduction in security incident mean time to resolution (MTTR).2

Reduced operational overheads: By automating routine tasks, generative AI can free analysts from repetitive processes like alert triage or patch validation, enabling them to focus on advanced threat hunting. Security teams can already leverage Security Copilot to translate complex scripts into natural language, highlighting and explaining key parts to enhance team skills and reduce investigation time for advanced investigations by as much as 85%, helping security teams operate at scale.3

“Increased support from AI is critical given the significant capacity challenge in the public sector: a shortage of talent, an influx of threats, and an ever-increasing volume of data, assets, and organizations.”

—National SOC customer

Building a resilient digital future together

As nation-state threat actors and cybercriminals increasingly employ generative AI in their cyberattacks, public sector organizations can no longer rely on fragmented, manual defenses. The path forward lies in public-private collaboration, centered on co-designing and innovating solutions tailored to the public sector’s unique needs.

By adopting Microsoft Security solutions, public sector organizations can leverage combined resources, expertise, and cutting-edge technology to fortify critical infrastructure, safeguard citizen data, and strengthen public trust.

Now is the time to act: Modernize your cyber defense in the AI era to collectively forge a more secure and resilient digital future for government and public sector operations.

Learn more

Learn more about Microsoft’s AI-powered unified security operations platform.

Learn more about Microsoft Sentinel.



1Microsoft Digital Defense Report 2024

2Generative AI and Security Operations Center Productivity: Evidence from Live Operations, Microsoft study. James Bono, Alec Xu, Justin Grana. November 24, 2024.

3The Total Economic Impact™ of Microsoft Sentinel, a commissioned study conducted by Forrester Consulting, March 2024. Results are based on a composite organization representative of interviewed customers.

New innovations in Microsoft Purview for protected, AI-ready data
http://approjects.co.za/?big=en-us/security/blog/2025/03/31/new-innovations-in-microsoft-purview-for-protected-ai-ready-data/ (Mon, 31 Mar 2025)

Microsoft Purview delivers a comprehensive set of solutions that help customers seamlessly secure and confidently activate data in the era of AI.

The Microsoft Fabric and Microsoft Purview teams are excited to be in Las Vegas from March 31 to April 2, 2025, for the second annual and highly anticipated Microsoft Fabric Community Conference. With more than 200 sessions, 13 focused tracks, 21 hands-on workshops, and two keynotes, attendees can expect an engaging and informative experience. The conference offers a unique opportunity for the community to connect and exchange insights on key topics such as data and AI.

AI innovation is impacting every industry, business process, and individual. About 75% of knowledge workers today use some sort of AI in their day-to-day work.1 At the same time, the regulatory landscape is evolving at an unprecedented pace. Around the world, at least 69 countries have proposed more than 1,000 AI-related policy initiatives and legal frameworks to address public concerns around AI safety and governance.2 With the need to adhere to regulations and policy frameworks for AI transformation, a comprehensive solution is needed to address security, governance, and privacy concerns. Additionally, with the convergence of the responsibilities of cybersecurity and data teams, customers are asking for a solution that turns data security and data governance into a team sport, addressing issues such as data discovery, data classification, data loss prevention, and data quality in a unified way. Microsoft Purview delivers a comprehensive set of solutions that address these needs, helping customers seamlessly secure and confidently activate their data in the era of AI.

We are excited to announce new innovations that help security and data teams accelerate their organization’s AI transformation:

  1. Enhancing Microsoft Purview Data Loss Prevention (Purview DLP) support for lakehouse in Microsoft Fabric to help prevent sensitive data loss by restricting access.
  2. Expanding Purview DLP policy support for additional Fabric items such as KQL databases and Mirrored databases to send users notifications through policy tips when they are working with sensitive data.
  3. Microsoft Purview integration with Copilot in Fabric, specifically for Power BI.
  4. Data Observability within the Microsoft Purview Unified Catalog.

Seamlessly secure data

Microsoft Purview is extending its proven data security value, delivered to millions of Microsoft 365 users worldwide, to the Microsoft data platform. This helps users drive consistency across their multicloud and multiplatform data estate and reduce risks related to data leaks, oversharing, and risky user behavior as more users manage and handle data in the era of AI.

1. Enhancing Microsoft Purview Data Loss Prevention (DLP) support for lakehouse in Fabric to help prevent sensitive data loss by restricting access

Microsoft Purview Data Security capabilities are used by hundreds of thousands of customers for their integration with Microsoft 365 data. Since last year’s Microsoft Fabric Community Conference, Microsoft Purview has extended Microsoft Purview Information Protection and Purview DLP policy tip value across the data estate, including Fabric. Currently, Purview DLP supports the ability to show users notifications when they are working with sensitive data in lakehouse. We are excited to share that we are enhancing the DLP value in lakehouse to prevent sensitive data leakage to guest users by restricting access. Data Security admins can configure policies and limit access to only internal users or data owners based on the sensitive data found. This control is valuable when a Fabric tenant includes guest users and domain owners want to limit access to internal proprietary data in their lakehouses.
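Conceptually, the access decision described above reduces to a small policy check: if sensitive data has been detected in an item, only internal users or the data owner keep access. The sketch below is hypothetical — field names such as `is_guest` and `contains_sensitive_data` are invented for illustration, and Purview’s actual policy engine and schema are far richer.

```python
def evaluate_access(user: dict, lakehouse: dict) -> str:
    """Illustrative DLP-style decision: once sensitive data is detected in a
    lakehouse, only internal users or the data owner retain access."""
    if not lakehouse.get("contains_sensitive_data"):
        return "allow"   # no sensitive findings, so the restriction does not apply
    if user.get("is_guest") and user.get("id") != lakehouse.get("owner_id"):
        return "deny"    # guests lose access to items holding sensitive data
    return "allow"
```

The key design point is that the decision keys off the classification result, not the item itself, so access tightens automatically as soon as sensitive data is discovered.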


Figure 1. DLP policy restricting access for guest users into lakehouse due to personally identifiable information (PII) data discovered 

2. Expanding DLP policy support for additional Fabric items such as KQL databases and Mirrored databases to show users notifications through policy tips when they are working with sensitive data

A key part of securing sensitive data is providing visibility to your users on where and how they are interacting with it. Purview DLP policies can help notify users when they are working with sensitive data through policy tips in lakehouse in Fabric. We are excited to announce that we are extending policy tip support to additional Fabric items: KQL databases and Mirrored databases, in preview. (Mirrored database sources include Azure Cosmos DB, Azure SQL Database, Azure SQL Managed Instance, Azure Databricks Unity Catalog, and Snowflake, with more sources available soon.) KQL databases are the only Fabric databases used for real-time analytics, so detecting sensitive data that comes through real-time analytics is a major benefit for Fabric customers. Purview DLP for Mirrored databases reduces the security risk of sensitive data leakage when data is transferred in Fabric. We are happy to extend Purview DLP value to more data sources, providing end-to-end protection for customers within their Fabric environments, all to prepare for the safe deployment of AI.


Figure 2. Policy tip triggered by Purview DLP due to PII being discovered in KQL databases.


Figure 3. Policy tip triggered by Purview DLP due to PII being discovered in Mirrored databases.

3. Microsoft Purview for Copilot in Fabric

As organizations adopt AI, implementing data controls and a Zero Trust approach is crucial to mitigate risks like data oversharing and leakage, and potential non-compliant usage in AI. We are excited to announce Microsoft Purview capabilities in preview for Copilot in Fabric, starting with Copilot for Power BI. By combining Microsoft Purview and Copilot for Power BI, users can:

  • Discover data risks such as sensitive data in user prompts and responses and receive recommended actions in their Microsoft Purview Data Security Posture Management (DSPM) dashboard to reduce these risks.
  • Identify risky AI usage with Microsoft Purview Insider Risk Management to investigate risky AI usage, such as an inadvertent user who has neglected security best practices and shared sensitive data in AI or a departing employee using AI to find sensitive data and exfiltrating the data through a USB device.
  • Govern AI usage with Microsoft Purview Audit, Microsoft Purview eDiscovery, retention policies, and non-compliant usage detection.

Figure 4. Purview DSPM for AI provides admins with comprehensive reports on Copilot in Fabric’s user activities, as well as data entered and shared within the copilot.

Confidently activate data

4. Data observability, now in preview, within Microsoft Purview Unified Catalog

Within the Unified Catalog in Microsoft Purview, users can easily identify the root cause of data quality issues by visually investigating the relationships between governance domains, data products, glossary terms, and the data assets associated with them through lineage. Data assets and their respective data quality are visible across your multicloud, hybrid data estate. Maintaining high data quality is core to driving trustworthy AI innovation forward, and with the new data observability capabilities in Microsoft Purview, users can now investigate and resolve root-cause issues faster to improve data quality and respond to regulatory reporting requirements.
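The root-cause investigation described above amounts to walking the lineage graph upstream from a failing asset until you reach the earliest ancestors that also fail their quality checks. The sketch below is a toy model of that idea — `upstream` maps each asset to the assets it is derived from — and is not Purview’s lineage API.

```python
def root_cause(asset: str, upstream: dict, failing: set) -> set:
    """Walk lineage upstream from `asset` and return the earliest ancestors
    that fail their data-quality checks (failing nodes with no failing parents)."""
    causes, seen, stack = set(), set(), [asset]
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        bad_parents = [p for p in upstream.get(node, []) if p in failing]
        if node in failing and not bad_parents:
            causes.add(node)   # failing with no failing parents: a root cause
        stack.extend(upstream.get(node, []))
    return causes
```

For a medallion-style pipeline (bronze → silver → report), this pins the failure on the earliest layer whose checks fail, which is exactly where remediation should start.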


Figure 5. Lineage view of data assets that showcases data quality within a Data Product.

Microsoft Purview and Microsoft Fabric can help secure and activate data

As your organization continues to implement AI, Microsoft Fabric and Microsoft Purview will serve as key solutions to safely activate your data for AI. Stay tuned for even more exciting innovations to come and check out the Fabric blog to read more about the innovations in Fabric.

Learn more

Explore these resources to stay updated on our product innovations in security and governance for your data:

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.


1Work Trends Index

2AI Regulations around the World – 2025

Securing generative AI models on Azure AI Foundry
http://approjects.co.za/?big=en-us/security/blog/2025/03/04/securing-generative-ai-models-on-azure-ai-foundry/ (Tue, 04 Mar 2025)

Discover how Microsoft secures AI models on Azure AI Foundry, ensuring robust security and trustworthy deployments for your AI systems.

New generative AI models with a broad range of capabilities are emerging every week. In this world of rapid innovation, when choosing the models to integrate into your AI system, it is crucial to make a thoughtful risk assessment that ensures a balance between leveraging new advancements and maintaining robust security. At Microsoft, we are focusing on making our AI development platform a secure and trustworthy place where you can explore and innovate with confidence. 

Here we’ll talk about one key part of that: how we secure the models and the runtime environment itself. How do we protect against a bad model compromising your AI system, your larger cloud estate, or even Microsoft’s own infrastructure?  

How Microsoft protects data and software in AI systems

But before we set off on that, let me set to rest one very common misconception about how data is used in AI systems. Microsoft does not use customer data to train shared models, nor does it share your logs or content with model providers. Our AI products and platforms are part of our standard product offerings, subject to the same terms and trust boundaries you’ve come to expect from Microsoft, and your model inputs and outputs are considered customer content and handled with the same protection as your documents and email messages. Our AI platform offerings (Azure AI Foundry and Azure OpenAI Service) are 100% hosted by Microsoft on its own servers, with no runtime connections to the model providers. We do offer some features, such as model fine-tuning, that allow you to use your data to create better models for your own use—but these are your models that stay in your tenant. 

So, turning to model security: the first thing to remember is that models are just software, running in Azure Virtual Machines (VM) and accessed through an API; they don’t have any magic powers to break out of that VM, any more than any other software you might run in a VM. Azure is already quite defended against software running in a VM attempting to attack Microsoft’s infrastructure—bad actors try to do that every day, not needing AI for it, and AI Foundry inherits all of those protections. This is a “zero-trust” architecture: Azure services do not assume that things running on Azure are safe! 

Now, it is possible to conceal malware inside an AI model. This could pose a danger to you in the same way that malware in any other open- or closed-source software might. To mitigate this risk, for our highest-visibility models we scan and test them before release: 

  • Malware analysis: Scans AI models for embedded malicious code that could serve as an infection vector and launchpad for malware. 
  • Vulnerability assessment: Scans for common vulnerabilities and exposures (CVEs) and zero-day vulnerabilities targeting AI models. 
  • Backdoor detection: Scans model functionality for evidence of supply chain attacks and backdoors such as arbitrary code execution and network calls. 
  • Model integrity: Analyzes an AI model’s layers, components, and tensors to detect tampering or corruption. 

You can identify which models have been scanned by the indication on their model card—no customer action is required to get this benefit. For especially high-visibility models like DeepSeek R1, we go even further and have teams of experts tear apart the software—examining its source code, having red teams probe the system adversarially, and so on—to search for any potential issues before releasing the model. This higher level of scanning doesn’t (yet) have an explicit indicator in the model card, but given its public visibility we wanted to get the scanning done before we had the UI elements ready. 

Defending and governing AI models

Of course, as security professionals, you presumably realize that no scan can detect all malicious activity. This is the same problem an organization faces with any other third-party software, and organizations should address it in the usual manner: trust in that software should come in part from trusted intermediaries like Microsoft, but above all should be rooted in an organization’s own trust (or lack thereof) in its provider.  

For those wanting a more secure experience, once you’ve chosen and deployed a model, you can use the full suite of Microsoft’s security products to defend and govern it. You can read more about how to do that here: Securing DeepSeek and other AI systems with Microsoft Security.

And of course, as the quality and behavior of each model differ, you should evaluate any model not just for security but also for whether it fits your specific use case, by testing it as part of your complete system. This is part of a wider approach to securing AI systems, which we’ll cover in depth in an upcoming blog. 

Using Microsoft Security to secure AI models and customer data

In summary, the key points of our approach to securing models on Azure AI Foundry are: 

  1. Microsoft carries out a variety of security investigations for key AI models before hosting them in the Azure AI Foundry Model Catalogue, and continues to monitor for changes that may impact the trustworthiness of each model for our customers. You can use the information on the model card, as well as your trust (or lack thereof) in any given model builder, to assess your position towards any model the way you would for any third-party software library. 
  2. All models hosted on Azure are isolated within the customer tenant boundary. There is no access to or from the model provider, including close partners like OpenAI. 
  3. Customer data is not used to train models, nor is it made available outside of the Azure tenant (unless the customer designs their system to do so). 

Learn more with Microsoft Security

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.

​​Join us for the end-to-end Microsoft RSAC 2025 Conference experience http://approjects.co.za/?big=en-us/security/blog/2025/02/18/join-us-for-the-end-to-end-microsoft-rsac-2025-conference-experience/ Tue, 18 Feb 2025 17:00:00 +0000 Join Microsoft at RSAC 2025, where we will showcase end-to-end security designed to help organizations accelerate the secure adoption of AI.

The post ​​Join us for the end-to-end Microsoft RSAC 2025 Conference experience appeared first on Microsoft AI Blogs.

AI adoption is picking up speed. Many companies are growing their technology estates by embracing powerful new solutions like generative AI. But to maximize the benefits of new technology with confidence, security professionals need to stay compliant with the evolving regulatory and audit requirements in the age of AI. It is in this spirit that Microsoft invites you to join us at RSAC™ 2025 Conference in San Francisco, where we will showcase end-to-end security designed to help organizations accelerate the secure adoption of AI with ready-to-go security and governance tools and solutions to multiply security teams’ productivity.

Across the Microsoft Security portfolio, our innovations, together with world-class threat and regulatory intelligence, will help give security experts the advantage they need in the era of AI. From our signature Pre-Day to hands-on demos and one-on-one meetings, join the Microsoft experience at RSAC 2025 designed just for you.

Kick things off at Microsoft Pre-Day

The Microsoft experience at RSAC 2025 begins with Microsoft Pre-Day on Sunday, April 27, 2025, at the Palace Hotel, just around the corner from the Moscone Center. For the fourth year running, the keynote speech held on Microsoft Pre-Day will kick off the full lineup of Microsoft events and activities throughout RSAC 2025. By joining us on Sunday, you’ll have the chance to hear directly from Microsoft Security business leaders—including Vasu Jakkal, Corporate Vice President, Microsoft Security Business; Charlie Bell, Executive Vice President, Microsoft Security; Sherrod DeGrippo, Director of Threat Intelligence Strategy; and other Microsoft Security leaders as they share reporting on emerging cyberthreat trends and the product innovations designed to protect against them. Vasu will also take the RSAC 2025 stage on Day 1 for the conference keynote.

At Pre-Day, attendees will hear Microsoft Security threat intelligence on emerging trends, explore new AI-first tools, demos, and best practices, and attain a better understanding of how Microsoft can help them secure and govern their AI deployments. Attend to discover how the adaptive, end-to-end security platform from Microsoft, including Microsoft Security Copilot, can help your team catch what others miss, speed up remediation, lower your total cost of ownership, and boost—rather than burden—you and your teams.

Stick around after Pre-Day for the reception—an evening of fun, networking, and entertainment, celebrating the vibrant security community. This is a unique opportunity to meet Microsoft security leaders, expand your professional network, and learn how others are addressing the latest security trends and challenges. Light refreshments will be served. Chief information security officers (CISOs) who register to attend Microsoft Pre-Day will automatically be invited to a Microsoft Pre-Day Security Executive Dinner with Vasu Jakkal.  

Make sure to register for Microsoft Pre-Day to join in on all the day’s activities.

Dedicated calendar of events for CISOs

Microsoft will be hosting a number of events tailored to CISOs throughout RSAC 2025. To kick off the week, Microsoft will be hosting a Pre-Day, followed by the exclusive Microsoft Pre-Day Security Executive Dinner on April 27, 2025. Following, there will be daily lunch-and-learn opportunities that address some of the primary challenges facing CISOs and their organizations:

  • Monday, April 28, 2025: Innovating Securely CISO Lunch—Learn insights concerning secure innovation centered around new AI regulations, including the EU AI Act, Digital Operational Resilience Act (DORA), and more.
  • Tuesday, April 29, 2025: SFI Executive Lunch—Open to all and focused around the needs of Latin America-based CISOs, this lunch will bring together leaders and experts interested in understanding the latest Secure Future Initiative (SFI) progress and exchanging their thoughts on related best practices.
  • Wednesday, April 30, 2025: Embracing Cyber Resilience CISO Lunch—Attendees are invited to network, learn, and exchange their insights regarding cyber resilience as the AI landscape evolves.

Finally, CISOs who attend RSAC 2025 are invited to stay through the end of the conference to attend the Microsoft Post-Day Forum at the Microsoft Experience Center at Silicon Valley on Thursday, May 1, 2025, from 9:00 AM PT to 1:00 PM PT. The day will be full of insightful presentations, interactive discussions, networking opportunities, and a curated CISO roundtable session. This informative day will also include an immersive tour of the unique state-of-the-art Microsoft Experience Center, which highlights larger-than-life solutions that show Microsoft’s cutting-edge technology solving many of today’s challenges. This experience is facilitated by envisioning specialists who spark inspired conversations, creative ideas, and new opportunities for leaders to participate in before returning home.

Sign up for Microsoft experiences at RSAC, including the Pre-Day, the Microsoft Pre-Day Security Executive Dinner, CISO lunch, and the Post-Day Forum. Request a one-on-one meeting with Microsoft experts to discuss your most pressing questions here.

Discover solutions to your challenges during the keynote speech and Microsoft sessions

As part of the RSAC agenda, Vasu Jakkal will take the stage on Monday, April 28, 2025, at 4:40 PM PT. During the speech, she will discuss the potential of agentic workflows to dramatically reshape the security landscape. Agentic AI has the power to enable more complex problem-solving, deeper agent collaboration, and iterative learning. All of this leads us toward a previously unheard-of new paradigm for security. Join Vasu Jakkal for an imaginative look at the future of AI security agents and how the people of our security teams will work alongside them to change the game.

​After the keynote and throughout the conference, attendees will be able to split their time between the Microsoft Security sessions included in the RSAC 2025 agenda, live demonstrations at booth #5744 in Moscone North, and a variety of roundtables, one-on-one meetings, and presentations at the Microsoft Security Hub at the Palace Hotel.

Here are two sessions not to miss:

  • Tuesday, April 29, 2025, at 9:40 AM PT: Shaping the Future of Security with Agentic AI​—In a time of rapidly evolving cyberthreats, agentic AI is emerging as a transformative force in security. Join Dorothy Li, Corporate Vice President of Microsoft Security Copilot and Marketplace, to discover how autonomous decision-making is reshaping our approach to cybersecurity. This session will reveal how agentic AI empowers organizations to proactively mitigate risks, enhance operational efficiency, and elevate the effectiveness of your security tools. Attendees will gain actionable insights and practical strategies for harnessing the potential of agentic AI. Prepare to rethink the future of security and position your organization at the forefront of innovation.​
  • Wednesday, April 30, 2025, at 9:40 AM PT: Accelerate AI Adoption with Stronger Security—AI adoption is accelerating, creating both new opportunities and security challenges. Led by Neta Haiby, Partner Product Manager at Microsoft​, this session covers key AI adoption trends, emerging risks, and common cyberthreats. Discover actionable steps to secure and govern AI, from establishing a dedicated security team for AI to adopting AI-specific solutions, ensuring your organization can innovate with confidence.​

Other well-known Microsoft experts will host sessions sharing what they’ve learned from their work pioneering and securing AI:

  • Wednesday, April 30, 2025 at 8:30 AM PT: Guardians of the Cyber Galaxy: Allies Against AI-Powered Cybercrime by Sean Farrell, Assistant General Counsel, Digital Crimes Unit.
  • Monday, April 28, 2025 at 1:10 PM PT: AI Era Authentication: Securing the Future with Inclusive Identity by Abhilasha Bhargav-Spantzel, Partner Security Architect, and Aditi Shah, Senior Data and Applied Scientist.
  • Tuesday, April 29, 2025, at 8:30 AM PT: AI Safety: Where Do We Go From Here? by Ram Shankar Siva Kumar, Principal Research Lead, AI Red Team Lead.
  • Tuesday, April 29, 2025, at 2:25 PM PT: Lessons Learned from a Year(ish) of Countering Malicious Actors’ Use of AI by Sherrod DeGrippo, Director of Threat Intelligence Strategy.

View live demonstrations and discover engaging ways to learn at booth #5744

At the Microsoft booth, attendees will have the chance to engage with experts, discover ready-to-go security and governance tools built for generative AI, and watch theater sessions showcasing the latest products, innovations, and industry perspectives from Microsoft. They’ll also get to enjoy a fun and interactive gaming experience. 

Microsoft product and partner experts will be on hand to showcase the newest advancements through captivating demonstrations, informative videos, and valuable resources. 

Visit the Microsoft booth theater for exclusive 20-minute demos and expert-led sessions on the latest in security and AI. Explore strategies to protect, govern, and secure AI. Listen in to insights on identity, compliance, privacy, threat defense, data protection, and more. Don’t miss this opportunity to learn from industry leaders and stay ahead in the ever-evolving security landscape.

Meetings and connections at the Microsoft Security Hub

The historic and luxurious Palace Hotel is home base for Microsoft during the week. RSAC 2025 attendees are invited to meet with Microsoft experts and executives, attend thought leadership sessions and roundtable lunches, and join networking opportunities. Detailed information about individual sessions can be found on the Microsoft Security Experiences at RSAC 2025 Landing Page.

Customers are also invited to deepen their understanding of the latest cybersecurity threats, trends, and developments by discussing their most important security product and threat intelligence questions directly with Microsoft security experts through scheduled one-on-one meetings, held from Monday, April 28, 2025, to Wednesday, April 30, 2025, at the Palace Hotel. Request your meeting directly through the Microsoft Security Experiences at RSAC 2025 Home Page.

The Microsoft Intelligent Security Association (MISA) will once again have a considerable presence at RSAC 2025. MISA partners will be featured in the Microsoft Booth #5744 and included in other events happening throughout the week. Additionally, the sixth annual Microsoft Security Excellence Awards, presented by MISA, will be held at the Palace Hotel in San Francisco on April 28, 2025, celebrating our finalists, announcing winners in nine award categories, and offering time to connect. 

Activities include:

  • MISA demo station: Stop by the Microsoft Booth to explore the innovative solutions developed by MISA members, which integrate Microsoft Security technology.
  • Theater sessions: Attend one or more of our five theater sessions at the Microsoft booth, led by MISA members, focusing on partner strategies and solutions for cyberthreat protection.
  • View the MISA demo and theater schedule.
  • MISA Partner awards: MISA members are invited to attend the Microsoft Security Excellence Awards on Monday, April 28, 2025, where winners will be announced in nine security award categories.

Get the most by staying through Microsoft Post-Day

Microsoft Post-Day Forum is a unique experience designed to help customers, CISOs, and security leaders dive deep into new concepts, ask questions they need answered about product features, and prepare to realize and enable the AI-first, end-to-end security concepts they’ve learned about throughout RSAC 2025. The Microsoft Post-Day Forum, hosted by Microsoft Security executives, will be held on Thursday, May 1, 2025, from 10:00 AM PT to 1:00 PM PT, at the Silicon Valley Experience Center. Pick up for the event will be held at the Palace Hotel at 8:00 AM PT, with drop off organized for 2:00 PM PT.

We look forward to seeing you at RSAC 2025!

Learn more about the Microsoft experience at RSAC 2025

Customers and partners can register for the events highlighted in this blog as well as other Microsoft ancillary events and more here.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.

Securing DeepSeek and other AI systems with Microsoft Security http://approjects.co.za/?big=en-us/security/blog/2025/02/13/securing-deepseek-and-other-ai-systems-with-microsoft-security/ Thu, 13 Feb 2025 17:00:00 +0000 Microsoft Security provides cyberthreat protection, posture management, data security, compliance and governance, and AI safety, to secure AI applications that you build and use. These capabilities can also be used to secure and govern AI apps built with the DeepSeek R1 model and the use of the DeepSeek app. 

The post Securing DeepSeek and other AI systems with Microsoft Security appeared first on Microsoft AI Blogs.

A successful AI transformation starts with a strong security foundation. With a rapid increase in AI development and adoption, organizations need visibility into their emerging AI apps and tools. Microsoft Security provides threat protection, posture management, data security, compliance, and governance to secure AI applications that you build and use. These capabilities can also be used to help enterprises secure and govern AI apps built with the DeepSeek R1 model and gain visibility and control over the use of the separate DeepSeek consumer app. 

Secure and govern AI apps built with the DeepSeek R1 model on Azure AI Foundry and GitHub 

Develop with trustworthy AI 

Last week, we announced DeepSeek R1’s availability on Azure AI Foundry and GitHub, joining a diverse portfolio of more than 1,800 models.   

Customers today are building production-ready AI applications with Azure AI Foundry, while accounting for their varying security, safety, and privacy requirements. Similar to other models provided in Azure AI Foundry, DeepSeek R1 has undergone rigorous red teaming and safety evaluations, including automated assessments of model behavior and extensive security reviews to mitigate potential risks. Microsoft’s hosting safeguards for AI models are designed to keep customer data within Azure’s secure boundaries. 

With Azure AI Content Safety, built-in content filtering is available by default to help detect and block malicious, harmful, or ungrounded content, with opt-out options for flexibility. Additionally, the safety evaluation system allows customers to efficiently test their applications before deployment. These safeguards help Azure AI Foundry provide a secure, compliant, and responsible environment for enterprises to confidently build and deploy AI solutions. See Azure AI Foundry and GitHub for more details.
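Azure AI Content Safety’s real API returns per-category severity scores for analyzed text. The minimal sketch below uses hypothetical names and category scores (it is not the actual SDK) to show the shape of default-on, threshold-based filtering with an opt-out achieved by raising thresholds:

```python
from dataclasses import dataclass, field

# Illustrative default severity thresholds per harm category
# (0 = safe, 7 = most severe); blocking is on by default.
DEFAULT_THRESHOLDS = {"hate": 2, "violence": 2, "self_harm": 2, "sexual": 2}

@dataclass
class FilterDecision:
    allowed: bool
    triggered: list = field(default_factory=list)  # categories that fired

def apply_content_filter(scores: dict, thresholds: dict = None) -> FilterDecision:
    """Block content whose severity meets or exceeds a category's threshold.
    Passing higher thresholds models the opt-out flexibility described above."""
    thresholds = thresholds or DEFAULT_THRESHOLDS
    triggered = sorted(c for c, s in scores.items() if s >= thresholds.get(c, 2))
    return FilterDecision(allowed=not triggered, triggered=triggered)
```

The design point is simply that filtering decisions are made per category and per severity level, so customers can tune or relax them without disabling the whole system.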

Start with Security Posture Management

AI workloads introduce new cyberattack surfaces and vulnerabilities, especially when developers leverage open-source resources. Therefore, it’s critical to start with security posture management, to discover all AI inventories, such as models, orchestrators, grounding data sources, and the direct and indirect risks around these components. When developers build AI workloads with DeepSeek R1 or other AI models, Microsoft Defender for Cloud’s AI security posture management capabilities can help security teams gain visibility into AI workloads, discover AI cyberattack surfaces and vulnerabilities, detect cyberattack paths that can be exploited by bad actors, and get recommendations to proactively strengthen their security posture against cyberthreats.

Figure 1. AI security posture management in Defender for Cloud detects an attack path to a DeepSeek R1 workload, where an Azure virtual machine is exposed to the internet.

By mapping out AI workloads and synthesizing security insights such as identity risks, sensitive data, and internet exposure, Defender for Cloud continuously surfaces contextualized security issues and suggests risk-based security recommendations tailored to prioritize critical gaps across your AI workloads. Relevant security recommendations also appear within the Azure AI resource itself in the Azure portal. This provides developers or workload owners with direct access to recommendations and helps them remediate cyberthreats faster. 

Safeguard DeepSeek R1 AI workloads with cyberthreat protection

While a strong security posture reduces the risk of cyberattacks, the complex and dynamic nature of AI also requires active monitoring at runtime. No AI model is exempt from malicious targeting; any model can be vulnerable to prompt injection cyberattacks and other cyberthreats. Monitoring the latest models is critical to ensuring your AI applications are protected.

Integrated with Azure AI Foundry, Defender for Cloud continuously monitors your DeepSeek AI applications for unusual and harmful activity, correlates findings, and enriches security alerts with supporting evidence. This provides your security operations center (SOC) analysts with alerts on active cyberthreats such as jailbreak cyberattacks, credential theft, and sensitive data leaks. For example, when a prompt injection cyberattack occurs, Azure AI Content Safety prompt shields can block it in real-time. The alert is then sent to Microsoft Defender for Cloud, where the incident is enriched with Microsoft Threat Intelligence, helping SOC analysts understand user behaviors with visibility into supporting evidence, such as IP address, model deployment details, and suspicious user prompts that triggered the alert. 

Figure 2. Microsoft Defender for Cloud integrates with Azure AI to detect and respond to prompt injection cyberattacks.

Additionally, these alerts integrate with Microsoft Defender XDR, allowing security teams to centralize AI workload alerts into correlated incidents to understand the full scope of a cyberattack, including malicious activities related to their generative AI applications. 

Figure 3. A security alert for a jailbreak prompt injection attack on an Azure AI model deployment is flagged in Defender for Cloud.

Secure and govern the use of the DeepSeek app

In addition to the DeepSeek R1 model, DeepSeek also provides a consumer app hosted on its local servers, where data collection and cybersecurity practices may not align with your organizational requirements, as is often the case with consumer-focused apps. This underscores the risks organizations face if employees and partners introduce unsanctioned AI apps leading to potential data leaks and policy violations. Microsoft Security provides capabilities to discover the use of third-party AI applications in your organization and provides controls for protecting and governing their use.

Secure and gain visibility into DeepSeek app usage 

Microsoft Defender for Cloud Apps provides ready-to-use risk assessments for more than 850 Generative AI apps, and the list of apps is updated continuously as new ones become popular. This means that you can discover the use of these Generative AI apps in your organization, including the DeepSeek app, assess their security, compliance, and legal risks, and set up controls accordingly. For example, for high-risk AI apps, security teams can tag them as unsanctioned apps and block users’ access to them outright.

Figure 4. Discover usage and control access to Generative AI applications based on their risk factors in Defender for Cloud Apps.
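The triage flow described above — assess each discovered app’s risk, then sanction or unsanction it — reduces to a simple policy function. The sketch below is illustrative only; the app names, 0–10 risk scores, and threshold are hypothetical stand-ins for the multi-factor assessments Defender for Cloud Apps actually performs:

```python
def triage_apps(apps, risk_threshold=7):
    """Tag each discovered generative AI app based on its assessed risk score.
    Apps at or above the threshold are marked unsanctioned (access blocked);
    the rest remain sanctioned. Scores and threshold are illustrative."""
    return {
        app["name"]: "unsanctioned" if app["risk"] >= risk_threshold else "sanctioned"
        for app in apps
    }
```

In practice the "risk" input would itself aggregate security, compliance, and legal factors per app, and tagging an app unsanctioned is what drives the access block for end users.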

Comprehensive data security 

In addition, Microsoft Purview Data Security Posture Management (DSPM) for AI provides visibility into data security and compliance risks, such as sensitive data in user prompts and non-compliant usage, and recommends controls to mitigate the risks. For example, the reports in DSPM for AI can offer insights on the type of sensitive data being pasted into Generative AI consumer apps, including the DeepSeek consumer app, so data security teams can create and fine-tune their data security policies to protect that data and prevent data leaks. 

Figure 5. Microsoft Purview Data Security Posture Management (DSPM) for AI enables security teams to gain visibility into data risks and get recommended actions to address them.

Prevent sensitive data leaks and exfiltration  

The leakage of organizational data is among the top concerns for security leaders regarding AI usage, highlighting the importance for organizations to implement controls that prevent users from sharing sensitive information with external third-party AI applications.

Microsoft Purview Data Loss Prevention (DLP) enables you to prevent users from pasting sensitive data or uploading files containing sensitive content into Generative AI apps from supported browsers. Your DLP policy can also adapt to insider risk levels, applying stronger restrictions to users that are categorized as ‘elevated risk’ and less stringent restrictions for those categorized as ‘low-risk’. For example, elevated-risk users are restricted from pasting sensitive data into AI applications, while low-risk users can continue their productivity uninterrupted. By leveraging these capabilities, you can safeguard your sensitive data from potential risks from using external third-party AI applications. Security admins can then investigate these data security risks and perform insider risk investigations within Purview. These same data security risks are surfaced in Defender XDR for holistic investigations.

Figure 6. Data Loss Prevention policy can block sensitive data from being pasted to third-party AI applications in supported browsers.
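The risk-adaptive behavior described above can be sketched as a small decision table: sensitive content plus an elevated-risk user yields a hard block, while lower-risk users get lighter-touch enforcement. The tier names and actions here are illustrative assumptions, not Purview’s actual policy schema:

```python
def dlp_action(user_risk: str, content_is_sensitive: bool) -> str:
    """Adaptive DLP decision: the same paste attempt gets stricter enforcement
    as the user's insider-risk level rises. Tiers/actions are illustrative."""
    if not content_is_sensitive:
        return "allow"  # non-sensitive content flows uninterrupted
    actions = {"elevated": "block", "moderate": "warn", "low": "audit"}
    return actions.get(user_risk, "block")  # fail closed for unknown levels
```

A policy engine shaped like this keeps low-risk users productive (their activity is merely audited) while blocking exfiltration paths for the users most likely to leak data.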

This is a quick overview of some of the capabilities to help you secure and govern AI apps that you build on Azure AI Foundry and GitHub, as well as AI apps that users in your organization use. We hope you find this useful!

To learn more and to get started with securing your AI apps, take a look at the additional resources below:  

Learn more with Microsoft Security

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity. 

Unleashing the power of AI in India http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2025/02/06/unleashing-the-power-of-ai-in-india/ Thu, 06 Feb 2025 16:00:00 +0000 India has embraced the power of AI to reshape industries, drive innovation, and unlock new opportunities across the nation.

The post Unleashing the power of AI in India appeared first on Microsoft AI Blogs.

This blog is part of the AI worldwide tour series, which highlights customers from around the globe who are embracing AI to achieve more. Read about how customers are using responsible AI to drive social impact and business transformation with Global AI innovation.

It’s no secret that India is well-positioned to be a global leader in the AI era, having embraced the power of AI to reshape industries, drive innovation, and unlock new opportunities across the nation. Boasting a vast talent pool, proactive government initiatives, and a thriving startup ecosystem, India is uniquely equipped to leverage AI to solve pressing societal and business challenges and optimize operations across a wide array of civic and business verticals.

A long-standing partner in India’s technological growth, Microsoft has solidified its commitment with a US $3 billion investment to expand AI and Azure cloud infrastructure in the country. This initiative is designed to accelerate AI adoption across industries, empower businesses to integrate AI into critical processes, and nurture local talent to meet the evolving demands of the tech ecosystem. These efforts underscore Microsoft’s confidence in India’s position as a global leader in AI innovation and technological advancement.

Local ingenuity was on full display during the Microsoft AI Tour stop in Bengaluru and New Delhi, where organizations showcased how they are leveraging AI to tackle complex challenges, streamline workflows, and drive transformative efficiencies across industries.

MakeMyTrip powers the future of travel with AI

MakeMyTrip (MMT), India’s leading online travel company, is at the forefront of enhancing the travel shopping experience with generative AI. Over its 24-year journey, MMT has served more than 77 million users, offering comprehensive travel booking services. A standout feature powered by generative AI is Myra, their conversational bot. MMT is integrating an AI-powered workflow within Myra to assist users seamlessly at every stage of their travel journey—from pre-trip planning to in-trip support and post-trip follow-up. Built using large language models (LLMs) and orchestrated via Microsoft Azure AI Foundry, these services ensure smooth assistance throughout the travel process. As one of the early adopters of generative AI in travel tech, MMT is leading the next generation of travel experiences.

Persistent Systems improves contract management with AI-powered agent

Persistent Systems, one of the world’s fastest-growing digital engineering and enterprise modernization service providers, faced recurring challenges in contract management: inefficient workflows and lengthy negotiation cycles were causing bottlenecks in an otherwise agile organization. Persistent turned to generative AI and Microsoft’s technology stack to reimagine its approach, developing ContractAssIst, an AI-powered agent built with Microsoft 365 Copilot, to transform collaboration and streamline internal contract negotiations. Designed with security and access controls in mind, the tool enhances collaboration, streamlines workflows, and accelerates decision-making.

As a result, ContractAssIst has reduced emails during negotiations by 95% and cut navigation and negotiation time by 70%, bringing the task down to approximately 20 to 25 minutes. Persistent has deployed Microsoft 365 Copilot to nearly 2,000 users and plans to extend it to a broader audience.

LTIMindtree unlocks data management with Microsoft 365 Copilot

LTIMindtree, a global technology consulting and digital solutions company with more than 84,000 employees in more than 30 countries, is leveraging AI in innovative ways to drive digital transformation and enhance business and IT operations. They have demonstrated how Microsoft 365 Copilot technology and AI agents are transforming critical business functions such as pre-sales, resource management, and cybersecurity. For example, custom-built AI agents help resource management teams quickly find employees with relevant skills and match them to specific projects, and help pre-sales and account managers use historical data to create high-quality responses to incoming requests for proposals (RFPs) and requests for information (RFIs). They are also using Microsoft Security Copilot to create a unified command center for investigations, threat intelligence, and incident response, empowering them to build a next-generation Security Operations Center (SOC). As a result, LTIMindtree has seen a 30% increase in overall employee efficiency, with 20% less time spent on emails and day-to-day task allocation.

Streamlining health claims with ICICI Lombard’s AI-powered solution

ICICI Lombard, a leading private insurer in India, has developed an innovative solution to streamline health claims processing. Traditionally, claim adjudicators processed claims manually, a time-consuming task that involved reviewing some 20 pages of documents per claim. Leveraging Microsoft Azure OpenAI Service, Azure AI Document Intelligence, and Azure AI Vision OCR service, ICICI Lombard’s new solution extracts relevant information from these documents, giving adjudicators a consolidated view of the diagnosis and treatment. This innovation has reduced claims processing time by more than 50%.

eSanjeevani transforms healthcare access with innovative AI solutions

eSanjeevani, India’s National Telemedicine Service by the Ministry of Health and Family Welfare, has integrated AI-enabled tools to enhance care quality and streamline teleconsultations, promoting equitable access to healthcare across the country. Powered by Azure, it offers secure, scalable, and accessible doctor-to-doctor and doctor-to-patient teleconsultations. eSanjeevani is advancing its AI journey with Microsoft AI, enhancing productivity, data analysis, and user experience. These innovations are helping eSanjeevani set new benchmarks in telemedicine and digital healthcare services. It is also developing a proof of concept with Microsoft Copilot to transcribe doctor-patient conversations in real time for advanced speech analytics, aiding data-driven decisions. Serving more than 330 million patients, 98% from rural areas, eSanjeevani is today the world’s largest telemedicine initiative in primary healthcare.

AI for everyone in India

India AI Tour keynote with Satya Nadella, Chief Executive Officer.

India’s AI journey is not just about innovation; it’s about transformation across industries and lives. From travel to healthcare, banking to engineering, the case studies showcased here demonstrate the immense potential of AI when paired with the right tools, partnerships, and vision. Microsoft’s investments and technologies have enabled organizations in India to tackle challenges, streamline processes, and unlock new levels of efficiency and growth. As India continues to lead in the global AI revolution, these examples serve as a testament to how AI can create meaningful impact, fostering a future where innovation drives progress for everyone.

Find the resources to support your AI journey

The post Unleashing the power of AI in India appeared first on Microsoft AI Blogs.

]]>
Hear from Microsoft Security experts at these top cybersecurity events in 2025 http://approjects.co.za/?big=en-us/security/blog/2025/02/03/hear-from-microsoft-security-experts-at-these-top-cybersecurity-events-in-2025/ Mon, 03 Feb 2025 17:00:00 +0000 Security events offer a valuable opportunity to learn about the latest trends and solutions, evolve your skills for cyberthreats, and meet like-minded security professionals. See where you can meet Microsoft Security in 2025.

The post Hear from Microsoft Security experts at these top cybersecurity events in 2025 appeared first on Microsoft AI Blogs.

]]>
Inspiration can spark in an instant when you’re at a conference. Perhaps you discover a new tool during a keynote that could save you hours of time. Or maybe a peer shares a story over coffee that makes you rethink an approach. One conversation, one session, or one event could give you fresh ideas, renewed excitement, and a vision for what to do next.

In the current AI landscape, inspiration and information are more important than ever for security professionals who want to stay ahead of threat actors. So if you’re looking to boost your skills and get ahead of the threat landscape, join Microsoft Security at the top cybersecurity events in 2025.

Whether you join us at an industry staple like RSAC or one of our own events like Microsoft Secure, you can benefit in several key ways:

  • Get insights and strategies needed to overcome obstacles and drive your security initiatives forward with confidence.
  • See live demos of the latest products, product features, skills, and tools you can use in your work. Be among the first to hear about Microsoft Security innovations, such as the Secure Future Initiative and XSPA (cross-site port attack) updates that attendees heard at Microsoft Ignite 2024.
  • Learn from Microsoft Security experts on global threat intelligence.
  • Network with other like-minded security pros, learn best practices from your peers, and meet one-on-one with our experts.

Whatever your role, there’s an event for you and a path to successfully safeguarding your organization.


Conferences to inspire and engage everyone

Large crowd of people attending Microsoft Ignite in Chicago, November 2024.

Security professionals of all levels can benefit from attending the biggest cybersecurity events, including RSAC and Black Hat, plus two premier Microsoft events—Microsoft Secure (virtual) and Microsoft Ignite (in-person and virtual). If you love being the first to hear about Microsoft product innovations, don’t miss these Microsoft events, packed with insights every security professional can put to good use.

Microsoft Secure

Date: April 9, 2025
Location: Online only

Microsoft Secure is Microsoft’s cybersecurity conference. This year’s one-hour digital showcase will spotlight AI-first, end-to-end security innovations, with clear use cases and customer stories about how organizations use our tools every day. Attendees will take a deep dive into cybersecurity products and strategies alongside thousands of other cybersecurity professionals.

RSAC

Dates: April 27-May 1, 2025
Location: San Francisco, CA

RSAC 2025 is a can’t-miss security conference, bringing together more than 40,000 security professionals to discuss the latest cybersecurity challenges and innovation with the best of the best. With the theme of “Many Voices. One Community,” RSAC will feature keynotes, track sessions, interactive sessions, networking opportunities, and an expo designed to foster advanced security strategies.

Throughout RSAC, Microsoft Security will showcase end-to-end security innovations and share world-class threat and regulatory intelligence. From our signature Pre-Day to hands-on demos and one-on-one meetings, discover how Microsoft Security can give you the advantage you need in the era of AI. Check out the full Microsoft at RSAC experience.

Black Hat

Dates: August 2-7, 2025
Location: Las Vegas, NV

The Black Hat Conference is a premier learning event in the cybersecurity industry, known for its in-depth technical sessions and cutting-edge research presentations on topics like critical infrastructure and information security research news.

Microsoft is a key sponsor of the conference each year, where we showcase our latest discoveries and AI research on real-world problems and solutions. Last year, our AI Red Teaming in Practice training sessions and our AI Summit roundtables were a hit. Black Hat is also known for its security community celebrations, including the Cybersecurity Woman of the Year Awards and the Researcher celebrations, which we take part in every year.

Microsoft Ignite

Dates: November 17-21, 2025
Location: San Francisco, CA, and online

Microsoft Ignite is Microsoft’s biggest annual conference for developers, IT professionals, business leaders, security professionals, and partners. Thousands of security professionals like you attend every year to hear the biggest security product announcements from Microsoft Security and gain training and skilling to prepare for future advancements in AI. Security professionals of all levels can join interactive labs, workshops, keynotes, technical breakout sessions, demos, and more, led by Microsoft Security leaders and experts.

Over the past few years, we’ve really boosted Microsoft Security experiences at Microsoft Ignite. Last year, we hosted the Microsoft Ignite Security Forum for security leaders and two workshops on AI red teaming and Microsoft 365 Copilot deployment. Plus, we hosted more than 30 sessions demoing new features to help you secure your environment, use your favorite Microsoft tools safely and securely, and make sure your organizational processes prioritize security first.

If you attend Microsoft Ignite in person this year, you won’t want to miss our Security Leaders Dinner or the security community party. If you’re not able to attend in person, you can register for our virtual event. Sign up to learn more.

Events for security leaders and decision-makers

A woman presenting during the Microsoft AI Tour.

Microsoft AI Tour

Dates: Through May 30, 2025
Location: Multiple worldwide

The Microsoft AI Tour is a free, one-day event for executives that explores the ways AI can drive growth and create lasting value in multiple cities around the globe. Whether you’re a functional decision-maker who evaluates investments, an IT team member charged with security, or a CISO revamping your security strategy, there will be valuable security content tailored to your needs.

Microsoft Security’s top business leaders attend AI Tour stops worldwide to share how Microsoft Security Copilot helps you protect at the speed and scale of AI. They are also available to meet with you.

Event location | Event date
Dubai, United Arab Emirates | February 6, 2025
Singapore, Southeast Asia | February 19, 2025
Tokyo, Japan | February 26-27, 2025
London, United Kingdom | March 5, 2025
Brussels, Belgium | March 25, 2025
Seoul, South Korea | March 26, 2025
Paris, France | March 26, 2025
Madrid, Spain | March 27, 2025
Tokyo, Japan | March 27, 2025
Beijing, China | April 23, 2025
Athens, Greece | May 27-30, 2025

Gartner Security and Risk Management Summit

Dates: June 9-11, 2025
Location: National Harbor, MD

The Gartner Security and Risk Management Summit (Gartner SRM) explores trends in cybersecurity risk management, including the integration of generative AI, being an effective CISO, the importance of balancing response and recovery efforts with prevention, combating misinformation, and closing the cybersecurity skills gap to build a resilient workforce.

Microsoft Security executives host sessions at Gartner SRM to help you ensure the security of AI systems and adopt AI to drive innovation and efficiency. Our most popular topics center around securing and governing AI.

Events for technical and security practitioners

People attending the Microsoft booth at RSAC 2024.

Security teams look for conferences that provide specialized knowledge on the industry in which they work or on a narrow cybersecurity topic.

Legalweek

Dates: March 24-27, 2025
Location: New York, NY

Legalweek is a weeklong conference where approximately 6,000 members of the legal community will gather to network with their peers, explore emerging trends, spotlight the latest tech, and offer a roadmap through industry shifts. Topics explored at past Legalweek conferences include the ethical and regulatory impact of using your data to train AI, litigation in the age of cybersecurity, and maximizing efficiency and legal automation.  

This year, we’ll be sponsoring three sessions on AI and one on collaboration in complex litigation. As in years past, Microsoft is hosting an Executive Breakfast at Legalweek from 7:30 AM to 8:45 AM ET on Tuesday, March 25, 2025. RSVP today and stop by Booth #3103 in New York Hilton Midtown Americas Hall 2 to learn more about the latest Microsoft Purview innovations. If you’d like to meet with our team while at Legalweek, sign up for a one-on-one meeting.

Identiverse

Dates: June 3-6, 2025
Location: Las Vegas, NV

Limiting access to AI, apps, and resources to those with the proper permissions is a crucial part of security. The Identiverse conference provides education, collaboration, and insight into the future of identity security. More than 2,500 attendees will share insights, develop new ideas, and advance the state of modern digital identity and security.

The event features sessions on best practices, industry trends, and latest technologies; an exhibition hall to showcase the latest identity solution innovations; and networking opportunities. Microsoft will host a booth where attendees can connect with Microsoft Security experts and leaders.

Events for developers

The cybersecurity talent shortage is requiring many to step up even if cybersecurity isn’t in their official job description. If you are an IT professional being tasked with cybersecurity or someone with an eagerness to learn cybersecurity tactics, join our Microsoft events aimed at helping you uplevel your cybersecurity skills.

Microsoft Build

Dates: May 19-22, 2025
Location: Seattle, WA

Security is a team sport, and developers are increasingly the first-string team members who build security into applications from the start. Microsoft Build 2025 is Microsoft’s developer-focused event. It will showcase updates and innovations from Microsoft Security that help developers create AI-enabled security solutions for their organizations.

The event includes connection opportunities, demos, and security-focused sessions. Past topics have included using AI to accelerate development processes, tools for enhancing the developer experience, and strategies for building in the cloud. Stay up to date on Microsoft Build news and find out when registration is open.

Find your inspiration at an event this year

Cybersecurity events foster a culture of continuous learning and adaptation, empowering you to stay ahead of emerging cyberthreats and maintain a resilient security posture. The ideas will flow freely at these events. Whether you attend one of the biggest conferences of the year or a smaller event (or both), you’ll be in good company. Microsoft Security will be there, too, excited to share and eager to learn.

Hope to see you at a future event!

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.

The post Hear from Microsoft Security experts at these top cybersecurity events in 2025 appeared first on Microsoft AI Blogs.

]]>
3 priorities for adopting proactive identity and access security in 2025 http://approjects.co.za/?big=en-us/security/blog/2025/01/28/3-priorities-for-adopting-proactive-identity-and-access-security-in-2025/ Tue, 28 Jan 2025 17:00:00 +0000 Adopting proactive defensive measures is the only way to get ahead of determined efforts to compromise identities and gain access to your environment.

The post 3 priorities for adopting proactive identity and access security in 2025 appeared first on Microsoft AI Blogs.

]]>
If 2024 taught us anything, it’s that a proactive, no-compromises approach to security is essential for 2025 and beyond.

Nation-states and advanced cybercriminals are making significant investments in infrastructure and automation to intensify familiar cyberattack patterns; password attacks, for example, escalated from 579 incidents per second in 20211 to 7,000 in 2024.2 These groups are also adopting emerging technologies such as AI to create deepfakes and personalized spear-phishing campaigns that manipulate people into granting unauthorized access.

Adopting proactive defensive measures is the only way to get ahead of such determined efforts to compromise identities and gain access to your environment.

Microsoft is strengthening our own defenses through the Secure Future Initiative (SFI), a multiyear commitment to advance the way we design, build, test, and operate Microsoft technology to ensure it meets the highest possible standards for security. One of our first steps was to conduct a full inventory of our environment and do a thorough “spring cleaning,” deleting 730,000 outdated and non-compliant apps and removing 5.75 million inactive Microsoft Entra ID tenants from production and test areas.3 As part of this process, we deeply examined identity and network access controls, addressed top risks, implemented standard practices, and improved our incident response.

We learned from talking with our largest customers that many are dealing with the exact same issues; they’re also assessing their environments to surface potential vulnerabilities and strengthen their defenses. Based on these learnings and on the evolving behavior of threat actors, we’ve identified three priorities for enhancing identity and access security measures for 2025:

  1. Start secure, stay secure, and prepare for new cyberthreats.
  2. Extend Zero Trust access controls to all resources.
  3. Use generative AI to tip the scales in favor of defenders.

1. Start secure, stay secure, and prepare for new cyberthreats

Many organizations struggle to eliminate technical and security debt while continuing to add new users, resources, and applications. While more of our customers are implementing basic identity security measures, such as multifactor authentication, they may still not enforce them everywhere. Moreover, basic measures aren’t enough to protect against advanced identity attacks such as token theft4 or adversary-in-the-middle phishing.5

It’s essential to understand your entire attack surface, identify all potential entry points, and proactively apply access security that closes any gaps.

Traditional security approaches deploy security tools and measures “as needed.” Unfortunately, the additive approach of starting at 100% open and then dialing up defenses leaves holes that bad actors can exploit and use as launching pads for lateral movement. Reactive security isn’t enough to safeguard your environment. Our guidance for 2025 is to always start at the highest level of security (Secure by Default), then dial back as needed for compatibility or other reasons. It’s also critical to protect all identities: employees, contractors, partners, customers, and, most importantly, machine, service, and AI identities.


To encourage Secure by Default practices with customers, Microsoft last year mandated the use of multifactor authentication across the Microsoft Azure portal, Microsoft Entra admin center, and Microsoft Intune admin center. To complement security defaults, we started rolling out Microsoft-managed Conditional Access policies for all new tenants to ensure you benefit from baseline risk-based security policies that are pre-configured and turned on by default.6 Tenants that retain security defaults experience 80% fewer compromised accounts than unprotected tenants, while compromise rates have fallen by 20.5% for Microsoft Entra ID Premium tenants with Microsoft-managed policies enabled.6

Outlined below are practical measures that any security leader can implement to improve hygiene and safeguard identities within their organization:

  • Implement multifactor authentication: Prioritize phishing-resistant authentication methods like passkeys, which are considered the most secure option currently available. Require multifactor authentication for all applications, including private and legacy ones. Also consider using high-assurance credentials like digital employee IDs with facial matching for workflows such as new employee onboarding and password resets.
  • Employ risk-based Conditional Access policies and continuous access evaluation: Configure strong Conditional Access policies that initiate additional security measures, such as step-up authentication, automatically for high-risk sign-ins. Allow only just-enough access, and ideally just-in-time access, to critical resources. Augment Conditional Access with continuous access evaluation to ensure ongoing access checks and to protect against token theft.
  • Discover and manage shadow IT: Detect unauthorized apps (also known as shadow IT) and tenants, so you can control access to them. Shadow IT often lacks essential security controls that organizations enforce and manage to prevent compromise. Shadow tenants, often created for development and testing, may lack sufficient security policies and controls. Establish standard processes for creating new tenants that are secure by default and then safely retiring them when they’re no longer needed.
  • Secure access for non-human identities: Start by taking an inventory of your workload identities. Replace secrets, credentials, certificates, and keys with more secure authentication, such as managed identities for Azure resources. Implement least privilege and just-in-time access coupled with granular Conditional Access policies for workload identities.  
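The risk-based Conditional Access measure above can be sketched as a policy object. The field names below follow the public Microsoft Graph `conditionalAccessPolicy` schema (such a policy is created with `POST /identity/conditionalAccess/policies`), but the display name, scoping, risk levels, and rollout state are illustrative assumptions; tune them to your tenant before using anything like this:

```python
import json

# Sketch of a risk-based Conditional Access policy in the Microsoft Graph
# conditionalAccessPolicy shape. The specific values (scope, risk levels,
# state) are illustrative assumptions, not a recommended configuration.
policy = {
    "displayName": "Require MFA for medium- and high-risk sign-ins",
    "state": "enabledForReportingButNotEnforced",  # report-only first, then enable
    "conditions": {
        "users": {"includeUsers": ["All"]},
        "applications": {"includeApplications": ["All"]},
        "signInRiskLevels": ["medium", "high"],
    },
    "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
}

print(json.dumps(policy, indent=2))
```

Starting in report-only mode (`enabledForReportingButNotEnforced`) lets you observe a policy's impact before enforcing it, which fits the guidance of starting at the highest level of security and dialing back deliberately rather than reactively.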

To get started: Explore Microsoft Entra ID capabilities for multifactor authentication, Conditional Access, continuous access evaluation, and Microsoft Entra ID Protection. Confirm that security defaults or Microsoft-managed Conditional Access Policies are enabled on all your tenants and obtain guidance on the phishing-resistant authentication methods available in Microsoft Entra ID, including passkeys. Use Microsoft Defender for Cloud Apps to discover and manage shadow IT in your Microsoft network. Adopt managed identities for Azure and workload identity federation, and strengthen access controls for non-human identities with Microsoft Entra Workload ID.

2. Extend Zero Trust access controls to all resources

It’s essential to have visibility, control, and governance over who and what has access to your environment, what they’re trying to do, and why. The goal is to enable flexible work while protecting against escalating cyberthreats. This requires extending Zero Trust access controls to every resource and entry point, including legacy on-premises applications and services, legacy devices and infrastructure, and any internet destinations. Consider how you can reduce effort and errors using automation, while also making it easier for security teams to share insights and collaborate.

Outlined below are key strategies for extending Zero Trust access controls to all resources.

  • Unify your access policy engines across all users, applications, endpoints, and networks to simplify your Zero Trust architecture. Converge access policies for identity security tools and network security tools to eliminate coverage gaps and enforce more robust access controls.
  • Extend modern access controls to all apps and internet resources: Use modern network security tools like Secure Access Service Edge to extend strong authentication, Conditional Access, and continuous access evaluation to legacy on-premises apps, shadow IT apps, and any internet destination. Retire your outdated VPN and configure granular per-app access policies to prevent lateral movement inside your network.
  • Enforce least privilege access: Automate your identity and access lifecycle to ensure that all users only have necessary access as they join your organization and change jobs, and that their access is revoked as soon as they leave. Use cloud human resources systems as a source of authority in join-move-leave workflows to enforce real-time access changes. Eliminate standing privileges and require just-in-time access for sensitive workloads and data. Regularly review access permissions to help prevent lateral movement in case of a user identity compromise.
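The join-move-leave principle above can be illustrated with a toy model in which entitlements are always derived from a user's current role rather than accumulated over time; the role and entitlement names are invented for this sketch and do not correspond to any real product API:

```python
# Toy join-move-leave model: access is recomputed from the user's *current*
# role on every check, so a job change drops stale entitlements and a
# departure (role=None) revokes everything. Names here are illustrative only.
ROLE_ENTITLEMENTS = {
    "sales": {"crm"},
    "engineering": {"source-control", "build-pipeline"},
}

def entitlements_for(role):
    # No standing access: unknown roles and leavers resolve to the empty set.
    return set(ROLE_ENTITLEMENTS.get(role, set()))

user = {"name": "avery", "role": "sales"}          # join
print(entitlements_for(user["role"]))              # {'crm'}

user["role"] = "engineering"                       # move: crm access is gone
print("crm" in entitlements_for(user["role"]))     # False

user["role"] = None                                # leave: everything revoked
print(entitlements_for(user["role"]))              # set()
```

Deriving access from the authoritative record (here, the role field; in practice, a cloud HR system) is what prevents the lateral-movement risk of permissions that outlive the job that justified them.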

To get started: Explore the Microsoft Entra Suite to secure user access and simplify Zero Trust deployments. Use entitlement management and lifecycle workflows to automate identity and access lifecycle processes. Use Microsoft Entra Private Access to replace legacy VPN with modern access controls, and use Microsoft Entra Internet Access to extend Conditional Access and continuous access evaluation to any resource, including shadow IT apps and internet destinations. Use Microsoft Entra Workload ID to secure access for non-human identities.

3. Use generative AI to tip the scales in favor of defenders

Generative AI is indispensable for staying ahead of cyberthreats in 2025. It helps defenders identify policy gaps, detect risks, and automate processes to strengthen security practices and defend against threats. A recent study found that within three months, organizations using Microsoft Security Copilot experienced a 30.13% reduction in average time to resolve security incidents.7 For identity teams, the impact is even more pronounced. IT admins using Copilot in the Microsoft Entra admin center spent 45.41% less time troubleshooting sign-ins, and increased accuracy by 46.88%.8

Outlined below are opportunities available to transform the daily work of identity professionals with generative AI:

  • Enhance risky user investigations: Investigate identity compromises faster with AI-powered recommendations for proactive mitigation and defense. Use natural language conversations to investigate risky users and to gain insights into elevated risk levels and risky sign-ins.
  • Troubleshoot sign-ins: Use natural language conversations to uncover root causes of sign-in failures, interruptions, or multifactor authentication prompts. Automate troubleshooting tasks and let AI discover actionable insights across user details, group details, sign-in logs, audit logs, and diagnostic logs.
  • Mitigate app risks: Use intuitive prompts to manage and remediate application risks as well as gain detailed insights into permissions, workload identities, and cyberthreats.
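Part of the sign-in troubleshooting described above starts with plain log aggregation before any AI is involved. The sketch below summarizes hypothetical sign-in records shaped loosely like Microsoft Entra sign-in log entries; the error codes 50076 (MFA required) and 50126 (invalid credentials) are real AADSTS codes, but the records themselves are invented:

```python
from collections import Counter

# Illustrative only: tally sign-in failures by error code to see what to
# investigate first. Record fields loosely mimic Entra sign-in log entries;
# this is not a Security Copilot or Graph API call.
sign_ins = [
    {"user": "a@contoso.com", "status": "failure", "errorCode": 50076},  # MFA required
    {"user": "b@contoso.com", "status": "failure", "errorCode": 50126},  # bad password
    {"user": "a@contoso.com", "status": "failure", "errorCode": 50076},
    {"user": "c@contoso.com", "status": "success", "errorCode": 0},
]

failures = Counter(r["errorCode"] for r in sign_ins if r["status"] == "failure")
top_code, count = failures.most_common(1)[0]
print(top_code, count)  # prints "50076 2": the most common failure to triage first
```

A summary like this is exactly the kind of structured context that makes a natural-language question to an AI assistant ("why are users hitting 50076?") productive rather than vague.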

At Microsoft Ignite 2024, we announced the preview of Security Copilot embedded directly into the Microsoft Entra admin center that included new skills to empower identity professionals and security analysts. We’re committed to enhancing Security Copilot to help identity and network security professionals collaborate effectively, respond more swiftly, and get ahead of emerging threats. We encourage you to participate in shaping these tools as we develop them.

To get started: Learn more about getting started with Microsoft Security Copilot.

Our commitment to supporting proactive security measures

By investing in proactive measures in 2025, you can significantly improve your security hygiene and operational resilience. To help you strengthen your defenses, we’re committed to innovating ahead of malicious actors, simplifying security to reduce the burden on security teams, and sharing everything we learn from protecting Microsoft and our customers.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.


1. The passwordless future is here for your Microsoft account, Vasu Jakkal. September 15, 2021.

2. Microsoft Digital Defense Report 2024.

3. Secure Future Initiative: September 2024 Progress Report, Microsoft.

4. How to break the token theft cyber-attack chain, Alex Weinert. June 20, 2024.

5. Defeating Adversary-in-the-Middle phishing attacks, Alex Weinert. November 18, 2024.

6. Automatic Conditional Access policies in Microsoft Entra streamline identity protection, Alex Weinert. November 3, 2023.

7. Generative AI and Security Operations Center Productivity: Evidence from Live Operations, Microsoft. November 2024.

8. Randomized Controlled Trials for Security Copilot for IT Administrators, Microsoft. November 2024.

The post 3 priorities for adopting proactive identity and access security in 2025 appeared first on Microsoft AI Blogs.

]]>