Security | The Microsoft Cloud Blog http://approjects.co.za/?big=en-us/microsoft-cloud/blog/tag/security/ Build the future of your business with AI Sat, 11 Apr 2026 20:18:47 +0000 en-US hourly 1 https://wordpress.org/?v=6.8.3 http://approjects.co.za/?big=en-us/microsoft-cloud/blog/wp-content/uploads/2026/04/cropped-favicon-32x32.png Security | The Microsoft Cloud Blog http://approjects.co.za/?big=en-us/microsoft-cloud/blog/tag/security/ 32 32 Navigating digital sovereignty at the frontier of transformation http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2026/03/25/navigating-digital-sovereignty-at-the-frontier-of-transformation/ Wed, 25 Mar 2026 07:00:00 +0000 Digital sovereignty has become a practical leadership discipline grounded in risk management, continuity planning, and long-term accountability.

The post Navigating digital sovereignty at the frontier of transformation appeared first on The Microsoft Cloud Blog.

]]>
Digital sovereignty is no longer a theoretical debate or a narrow compliance exercise. For leaders across governments, regulated industries, and critical infrastructure sectors, it has become a practical leadership discipline grounded in risk management, continuity planning, and long-term accountability.

Over the past several years, we have seen customer concerns evolve materially. Early conversations focused primarily on privacy and lawful data handling. Today, those concerns have expanded. Leaders are now asking how they maintain operational continuity during disruption, how they adopt AI responsibly without losing control, and how they protect national, organizational, and customer interests in an increasingly volatile global environment.

These questions are not abstract. They surface in boardrooms, procurement decisions, architecture reviews, and crisis simulations. They reflect a broader shift in how trust is evaluated in digital systems. Today in Brussels we brought together attendees from around the world—policy makers, IT leaders, and enterprises—to approach these questions from the multiplicity of perspectives to move the conversation from headlines to action.

From privacy to resilience and beyond

Privacy remains foundational. But it is no longer the sole lens through which sovereignty is assessed.

Customers are increasingly concerned about business continuity in the face of cyber incidents, geopolitical tension, supply chain disruption, and network instability. They want to understand how critical workloads operate if connectivity is constrained, if dependencies fail, or if policy conditions change with little warning.

At the same time, innovation pressures have intensified. AI is becoming central to public service delivery, national competitiveness, and economic growth. Organizations cannot afford to pause progress while sovereignty questions are debated in isolation. They need approaches that allow them to move forward responsibly, balancing opportunity with control.

What we hear consistently is this: sovereignty concerns will continue to evolve. Any approach that treats them as static is already behind.

For four decades, Microsoft has operated under some of the world’s most demanding data protection, competition, and digital governance frameworks. Working closely with European institutions, regulators, and customers has shaped how we think about sovereignty—not as a regional exception, but as a discipline that must function at scale, under scrutiny, and over time. That experience matters because many of the sovereignty questions now emerging globally were first tested in Europe, long before they became mainstream elsewhere.

A consultative approach to risk management

This is why we believe digital sovereignty must be approached as consultative risk management, not a checkbox or a predefined deployment model.

Every organization faces a unique mix of regulatory obligations, cyber risk, operational exposure, and innovation goals. Even within a single institution, sovereignty requirements differ by workload. Some demand strict isolation and local control. Others require global scale, advanced security capabilities, and rapid innovation.

Our role is to help customers navigate these tradeoffs deliberately. That means working with them to assess risk, align architecture to policy realities, and design environments that reflect both today’s constraints and tomorrow’s unknowns.

This work sits at the intersection of cybersecurity, compliance, resilience, and frontier transformation. It requires ongoing engagement, transparency, and the willingness to adapt as conditions change.

Digital sovereignty posture in practice

A digital sovereignty posture that is flexible recognizes that no single approach can address every requirement. Instead, it focuses on giving organizations options, visibility, and control across a continuum of environments.

Customers operating in public cloud environments expect clear data residency options, strong encryption and access controls, and visible operational discipline. Just as important, they look for transparency into how cloud systems are governed and how exceptional situations are managed, particularly as regulatory scrutiny increases.

Those expectations do not disappear when workloads move closer to the edge. In fact, they intensify. For workloads that require greater isolation, local processing, or operation in constrained environments, hybrid and disconnected solutions become essential. In February, Microsoft announced the expansion of disconnected operations, enabling customers to run critical workloads in air-gapped environments while retaining consistent governance and operational control. This capability extends cloud-based practices into disconnected settings, supporting operational continuity without abandoning security and innovation. 

That commitment shows up in concrete safeguards that customers can independently evaluate and apply. The EU Data Boundary is one example, supporting data storage and processing within the EU and European Free Trade Association (EFTA) regions for cloud services, alongside longstanding investments in encryption, access controls, auditability, and operational transparency. These measures provide practical mechanisms for aligning cloud operations with regulatory and risk requirements, rather than relying on abstract assurances. 

At the same time, we are expanding options across hybrid and private cloud environments to support continuity, resilience, and local control where required. These investments reflect a simple reality: customer needs are not converging toward one model. They are diversifying.

Underpinning all of this are Microsoft’s digital commitments, which frame how we approach privacy, security, transparency, and responsible AI. These commitments are not marketing statements. They guide how systems are built, operated, and governed, and they provide a foundation for long-term accountability.

Practical guidance for leaders navigating sovereignty

As digital sovereignty becomes embedded in policy and procurement decisions, leaders benefit from a practical lens. Based on what we hear from customers and stakeholders, there are a few consistent themes shaping successful approaches:

  • Sovereignty requirements will continue to expand beyond privacy to include continuity, resilience, and AI governance.
  • Risk management is now inseparable from digital transformation strategy.
  • Flexibility and optionality matter more than rigid architectures.
  • Transparency and accountability are as important as technical capability.
  • Sovereignty posture must consider protections against cyberthreats.

Addressing these realities requires partners who understand the full scope of the challenge and are willing to engage over the long term. It requires platforms and collaboration designed with sovereignty in mind from the start.

So what does this mean for you?

Digital sovereignty is not a destination. It is an ongoing discipline shaped by changing technology, regulation, and global conditions.

At Microsoft, we approach this work with humility and responsibility. We recognize that customer concerns will continue to evolve, and that our own platforms and practices must evolve with them. We remain committed to expanding our sovereign cloud continuum, strengthening our cloud capabilities, and delivering solutions that balance innovation with control.

Most importantly, we remain focused on delivery. Because in moments of uncertainty, what matters most is not what technology promises, but what it allows organizations to do with confidence.

Where does digital sovereignty go from here?

The future of digital sovereignty will be defined by implementation, not rhetoric. Success will depend on collaboration between governments, industry, and civil society, as well as a shared commitment to transparency and continuous improvement.

As we look ahead, our focus remains on helping organizations turn sovereignty principles into durable, scalable outcomes. That means continuing to invest in capabilities that support trust, engaging constructively with policymakers, and listening closely to the evolving needs of our customers.

Digital trust is built over time, through consistent action and openness, and that trust is one of the most important foundations we can help create.

The post Navigating digital sovereignty at the frontier of transformation appeared first on The Microsoft Cloud Blog.

]]>
80% of Fortune 500 use active AI Agents: Observability, governance, and security shape the new frontier http://approjects.co.za/?big=en-us/security/blog/2026/02/10/80-of-fortune-500-use-active-ai-agents-observability-governance-and-security-shape-the-new-frontier/ Tue, 17 Feb 2026 15:45:00 +0000 Read Microsoft’s new Cyber Pulse report for straightforward, practical insights and guidance on new cybersecurity risks.

The post 80% of Fortune 500 use active AI Agents: Observability, governance, and security shape the new frontier appeared first on The Microsoft Cloud Blog.

]]>
Today, Microsoft is releasing the new Cyber Pulse report to provide leaders with straightforward, practical insights and guidance on new cybersecurity risks. One of today’s most pressing concerns is the governance of AI and autonomous agents. AI agents are scaling faster than some companies can see them—and that visibility gap is a business risk.1 Like people, AI agents require protection through strong observability, governance, and security using Zero Trust principles. As the report highlights, organizations that succeed in the next phase of AI adoption will be those that move with speed and bring business, IT, security, and developer teams together to observe, govern, and secure their AI transformation.

Read the latest Cyber Pulse report

Agent building isn’t limited to technical roles; today, employees in various positions create and use agents in daily work. More than 80% of Fortune 500 companies today use AI active agents built with low-code/no-code tools.2 AI is ubiquitous in many operations, and generative AI-powered agents are embedded in workflows across sales, finance, security, customer service, and product innovation. 

With agent use expanding and transformation opportunities multiplying, now is the time to get foundational controls in place. AI agents should be held to the same standards as employees or service accounts. That means applying long‑standing Zero Trust security principles consistently:

  • Least privilege access: Give every user, AI agent, or system only what they need—no more.
  • Explicit verification: Always confirm who or what is requesting access using identity, device health, location, risk level.
  • Assume compromise can occur: Design systems expecting that cyberattackers will get inside.

These principles are not new, and many security teams have implemented Zero Trust principles in their organization. What’s new is their application to non‑human users operating at scale and speed. Organizations that embed these controls within their deployment of AI agents from the beginning will be able to move faster, building trust in AI.

The rise of human-led AI agents

The growth of AI agents expands across many regions around the world from the Americas to Europe, Middle East, and Africa (EMEA), and Asia.

A graph showing the percentages of the regions around the world using AI agents.

According to Cyber Pulse, leading industries such as software and technology (16%), manufacturing (13%), financial institutions (11%), and retail (9%) are using agents to support increasingly complex tasks—drafting proposals, analyzing financial data, triaging security alerts, automating repetitive processes, and surfacing insights at machine speed.3 These agents can operate in assistive modes, responding to user prompts, or autonomously, executing tasks with minimal human intervention.

A graphic showing the percentage of industries using agents to support complex tasks.
Source: Industry Agent Metrics were created using Microsoft first-party telemetry measuring agents build with Microsoft Copilot Studio or Microsoft Agent Builder that were in use during the last 28 days of November 2025.

And unlike traditional software, agents are dynamic. They act. They decide. They access data. And increasingly, they interact with other agents.

That changes the risk profile fundamentally.

The blind spot: Agent growth without observability, governance, and security

Despite the rapid adoption of AI agents, many organizations struggle to answer some basic questions:

  • How many agents are running across the enterprise?
  • Who owns them?
  • What data do they touch?
  • Which agents are sanctioned—and which are not?

This is not a hypothetical concern. Shadow IT has existed for decades, but shadow AI introduces new dimensions of risk. Agents can inherit permissions, access sensitive information, and generate outputs at scale—sometimes outside the visibility of IT and security teams. Bad actors might exploit agents’ access and privileges, turning them into unintended double agents. Like human employees, an agent with too much access—or the wrong instructions—can become a vulnerability. When leaders lack observability in their AI ecosystem, risk accumulates silently.

According to the Cyber Pulse report, already 29% of employees have turned to unsanctioned AI agents for work tasks.4 This disparity is noteworthy, as it indicates that numerous organizations are deploying AI capabilities and agents prior to establishing appropriate controls for access management, data protection, compliance, and accountability. In regulated sectors such as financial services, healthcare, and the public sector, this gap can have particularly significant consequences.

Why observability comes first

You can’t protect what you can’t see, and you can’t manage what you don’t understand. Observability is having a control plane across all layers of the organization (IT, security, developers, and AI teams) to understand:  

  • What agents exist 
  • Who owns them 
  • What systems and data they touch 
  • How they behave 

In the Cyber Pulse report, we outline five core capabilities that organizations need to establish for true observability and governance of AI agents:

  • Registry: A centralized registry acts as a single source of truth for all agents across the organization—sanctioned, third‑party, and emerging shadow agents. This inventory helps prevent agent sprawl, enables accountability, and supports discovery while allowing unsanctioned agents to be restricted or quarantined when necessary.
  • Access control: Each agent is governed using the same identity‑ and policy‑driven access controls applied to human users and applications. Least‑privilege permissions, enforced consistently, help ensure agents can access only the data, systems, and workflows required to fulfill their purpose—no more, no less.
  • Visualization: Real‑time dashboards and telemetry provide insight into how agents interact with people, data, and systems. Leaders can see where agents are operating, understanding dependencies, and monitoring behavior and impact—supporting faster detection of misuse, drift, or emerging risk.
  • Interoperability: Agents operate across Microsoft platforms, open‑source frameworks, and third‑party ecosystems under a consistent governance model. This interoperability allows agents to collaborate with people and other agents across workflows while remaining managed within the same enterprise controls.
  • Security: Built‑in protections safeguard agents from internal misuse and external cyberthreats. Security signals, policy enforcement, and integrated tooling help organizations detect compromised or misaligned agents early and respond quickly—before issues escalate into business, regulatory, or reputational harm.

Governance and security are not the same—and both matter

One important clarification emerging from Cyber Pulse is this: governance and security are related, but not interchangeable.

  • Governance defines ownership, accountability, policy, and oversight.
  • Security enforces controls, protects access, and detects cyberthreats.

Both are required. And neither can succeed in isolation.

AI governance cannot live solely within IT, and AI security cannot be delegated only to chief information security officers (CISOs). This is a cross functional responsibility, spanning legal, compliance, human resources, data science, business leadership, and the board.

When AI risk is treated as a core enterprise risk—alongside financial, operational, and regulatory risk—organizations are better positioned to move quickly and safely.

Strong security and governance do more than reduce risk—they enable transparency. And transparency is fast becoming a competitive advantage.

From risk management to competitive advantage

This is an exciting time for leading Frontier Firms. Many organizations are already using this moment to modernize governance, reduce overshared data, and establish security controls that allow safe use. They are proving that security and innovation are not opposing forces; they are reinforcing ones. Security is a catalyst for innovation.

According to the Cyber Pulse report, the leaders who act now will mitigate risk, unlock faster innovation, protect customer trust, and build resilience into the very fabric of their AI-powered enterprises. The future belongs to organizations that innovate at machine speed and observe, govern and secure with the same precision. If we get this right, and I know we will, AI becomes more than a breakthrough in technology—it becomes a breakthrough in human ambition.

Get the full Cyber Pulse report

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.


1Microsoft Data Security Index 2026: Unifying Data Protection and AI Innovation, Microsoft Security, 2026.

2Based on Microsoft first‑party telemetry measuring agents built with Microsoft Copilot Studio or Microsoft Agent Builder that were in use during the last 28 days of November 2025.

3Industry and Regional Agent Metrics were created using Microsoft first‑party telemetry measuring agents built with Microsoft Copilot Studio or Microsoft Agent Builder that were in use during the last 28 days of November 2025.

4July 2025 multi-national survey of more than 1,700 data security professionals commissioned by Microsoft from Hypothesis Group.

Methodology:

Industry and Regional Agent Metrics were created using Microsoft first‑party telemetry measuring agents built with Microsoft Copilot Studio or Microsoft Agent Builder that were in use during the past 28 days of November 2025. 

2026 Data Security Index: 

A 25-minute multinational online survey was conducted from July 16 to August 11, 2025, among 1,725 data security leaders. 

Questions centered around the data security landscape, data security incidents, securing employee use of generative AI, and the use of generative AI in data security programs to highlight comparisons to 2024. 

One-hour in-depth interviews were conducted with 10 data security leaders in the United States and United Kingdom to garner stories about how they are approaching data security in their organizations. 

Definitions: 

Active Agents are 1) deployed to production and 2) have some “real activity” associated with them in the past 28 days.  

“Real activity” is defined as 1+ engagement with a user (assistive agents) OR 1+ autonomous runs (autonomous agents).  

The post 80% of Fortune 500 use active AI Agents: Observability, governance, and security shape the new frontier appeared first on The Microsoft Cloud Blog.

]]>
From awareness to action: Building a security-first culture for the agentic AI era http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2025/12/10/from-awareness-to-action-building-a-security-first-culture-for-the-agentic-ai-era/ Wed, 10 Dec 2025 16:00:00 +0000 Microsoft helps leaders secure AI adoption with governance, training, and culture—turning cybersecurity into a growth and trust accelerator.

The post From awareness to action: Building a security-first culture for the agentic AI era appeared first on The Microsoft Cloud Blog.

]]>
The insights gained from Cybersecurity Awareness Month, right through to Microsoft Ignite 2025, demonstrate that security remains a top priority for business leaders. It serves as a strategic lever for organizational growth, fosters trust, and facilitates the advancement of AI innovation. The Work Trend Index 2025 indicates that over 80% of leaders are currently utilizing agents or plan to do so within the next 12 to 18 months. While AI introduces risks such as oversharing, data leakage, compliance gaps, and agent sprawl, business and security leaders can address these issues in part by: 

  1. Preparing for the integration of AI and agents.
  2. Strengthening training so that everyone has the necessary skills. 
  3. Fostering a culture that prioritizes cybersecurity. 

Preparing for the integration of AI and intelligent agents

Preparing for AI and agent integration calls for careful strategy, thoughtful business planning, and organization-wide adoption under solid governance, security, and management. Microsoft’s AI adoption model offers a step-by-step guide for businesses embarking on this journey and the guide offers actionable insights and solutions to manage AI risks.

Strengthening training so that everyone has the necessary skills

Technology alone isn’t enough. People are your strongest defense—and the foundation of trust. That’s why skilling emerged as a central theme throughout these past months and will continue beyond. Frontier Firms—those structured around on-demand intelligence and powered by “hybrid” teams of humans plus agents—lead by fostering a culture of continuous learning. Our blog “Building human-centric security skills for AI” offers insights and guidance you can apply in your organization.  

  • Lean into your unique human strengths: Your team’s judgment, creativity, and experience are irreplaceable. Take time to invest in upskilling and reskilling them, so they can confidently guide and manage AI tools responsibly and securely. Explore Microsoft Learn for Organizations for resources to support your learning journey.
  • Stay curious and agile through continuous learning: Building security resilience is an ongoing process. Regularly refresh your AI and security training, offer time and resources for employees to explore new skills, and create a supportive, engaging environment that motivates continuous growth. Find in AI Skills Navigator, our agentic learning space, AI and security training tailored to different roles.  

Investing in skilling doesn’t just reduce risk—it accelerates innovation by giving teams the confidence to explore new AI capabilities securely. 

Skilling is an ongoing practice that needs to constantly evolve alongside the business and technology landscape. Staying ahead requires an enterprise-wide strategy that aligns ever-changing business priorities with always-on skill-building. 

—Jeana Jorgensen, Corporate Vice President, Microsoft Learning

Fostering a culture that prioritizes security

As AI impacts everyone’s role, make security awareness and responsible AI practices shared priorities. Encourage your team to weave security thinking into their daily routines—creating a safer environment for all. As Vasu Jakkal, Corporate Vice President of Microsoft Security highlighted in her blog “Cybersecurity Awareness Month: Security starts with you,” it is critical that security become part of your organization’s culture and norms. 

Check out our new e-book, Skilling for Secure AI: How Frontier Firms Lead the Way for practical steps for leaders to upskill their workforce in identity management, data governance, and responsible AI practices.

From awareness to action

In the agentic AI era, people continue to be our most valuable resource. It’s essential to empower them with AI and equip them with the skills they need to use AI responsibly and securely. Cybersecurity awareness should go beyond designated months or campaigns; true awareness means taking meaningful action.   

Here are three actions you can take today to maximize your AI investments: 

  1. Share the Be Cybersmart Kit with your employees. It includes tips for protecting yourself from fraud and deepfakes, guidance on safe AI usage, and key security best practices.
  2. Invest in people: Focus on upskilling initiatives that support your AI transformation, cloud modernization, and security-first strategies.
  3. Champion a security-first culture: Ensure cybersecurity is integral to every business discussion and woven into your overall strategy. 

Microsoft guide for securing the AI-powered enterprise

The post From awareness to action: Building a security-first culture for the agentic AI era appeared first on The Microsoft Cloud Blog.

]]>
Cybersecurity Awareness Month: Security starts with you http://approjects.co.za/?big=en-us/security/blog/2025/10/01/cybersecurity-awareness-month-security-starts-with-you/ Wed, 01 Oct 2025 16:00:00 +0000 Make the most out of Cybersecurity Awareness Month with resources from Microsoft.

The post Cybersecurity Awareness Month: Security starts with you appeared first on The Microsoft Cloud Blog.

]]>
At Microsoft, security is our number one priority, and we believe that cybersecurity is as much about people as it is about technology. As we move into October and kick off Cybersecurity Awareness Month, this time of year really makes me think about how important online safety is—not just at work, but for my family and friends too. I often find myself sharing tips with loved ones on how to stay safe online, because building strong security habits and keeping them top of mind has become a key part of how I protect myself and those around me.

Explore Microsoft Cybersecurity Awareness resources

As part of the Microsoft Secure Future Initiative (SFI), we have committed to embed security into every layer of our technology, culture, and governance—placing security above all else. Since its launch in November 2023, SFI has mobilized the equivalent of more than 34,000 engineers to proactively reduce risk and strengthen security across Microsoft and the products and services we offer our customers. A great example of this is mitigating advanced multifactor authentication attacks, where phishing-resistant multifactor authentication now protects 100% of production system accounts and 92% of employee productivity accounts. In addition, we continue to reduce the risk of compromise during new employee setup by enforcing video-based verification, now at 99%.1

Enabling your security-first approach

This year, we have also developed new resources and tools to support security professionals in keeping their organizations secure, particularly as we enter this next era of AI. Building upon our learnings with SFI, we have created SFI patterns and practices, which is a new library of actionable guidance designed to help organizations implement security at scale.

In addition to best practices for security professionals, we continue to add articles to our Be Cybersmart Kit, which is a great starting point for security professionals that need to educate their organizations on how to be safe. The Be Cybersmart Kit contains articles on AI safety, device security, domain impersonation, fraud, secure sign-in, and phishing. The kit is just one of the many resources available on the Microsoft Cybersecurity Awareness site

Be Cybersmart

Help educate everyone in your organization with cybersecurity awareness resources and training curated by the security experts at Microsoft.

Get the Be Cybersmart Kit.

Those seeking more in-depth resources can access expert-level learning paths, certifications, and technical documentation to continue their cybersecurity education. And for students pursuing the field of cybersecurity, the Microsoft Cybersecurity Scholarship Program and educational opportunities like Microsoft Elevate are here to help. The goal of all these programs is to help foster a culture that puts security and continuous learning first for students and professionals alike.

Security-first in action: Franciscan Alliance

A great example of a security-first culture, especially around education and awareness training, is Franciscan Alliance, a non-profit Catholic health care organization based in Indiana. Franciscan Alliance employs a proactive and interactive strategy for cybersecurity awareness and employee education.

“We believe cybersecurity education should be continuous, engaging, and empowering—because informed employees are our strongest defense.”

—Jay Bhat, Chief Information Security Officer (CISO), Franciscan Alliance

The organization conducts monthly phishing simulations and quarterly assessments to expose staff to realistic scenarios consistently. Employees who do not pass the quarterly assessments are provided with additional training rather than being penalized, which supports a culture centered on learning and development. Training programs incorporate gamification elements to enhance accessibility and retention. Additionally, employees receive a monthly newsletter covering relevant security topics that support safe practices both professionally and personally.

During Cybersecurity Awareness Month, weekly editions are distributed, along with timely updates on emerging threats, including breaches and attacks. Franciscan Alliance also organizes threat briefings in partnership with external partners and utilizes resources such as Microsoft’s Cybersecurity Awareness materials to inform its training initiatives.

Developing security competencies in the age of AI

As organizations rapidly embrace AI, making security the first priority is not just a best practice—it’s a necessity. AI systems are powerful tools that can transform business productivity, but without robust governance and security measures, they can also introduce significant risks. To address these challenges and empower security-first leadership, we invite C-level executives to register for Microsoft’s upcoming webinar “Trust in AI: Accelerate Business Growth with Confidence,” which will feature critical discussions on how to build trust in AI for your organization.

Get started here:

Additionally, Microsoft’s Chief Product Officer of Responsible AI Sarah Bird will moderate the panel, “Cyber and AI, Strategic Risk and Competitive Advantage,” at the NASDAQ Summit on October 21, 2025, at the New York Stock Exchange, where industry experts will provide guidance on governance and security for AI. In this session, experts will discuss real-world use cases, regulatory developments, and the strategic implications of integrating AI into enterprise environments. Events such as these are incredible opportunities for executives to deepen their understanding and lead with confidence in the age of AI.

Get the Be Cybersmart Kit

Make the most out of Cybersecurity Awareness Month

We hope that these resources provide you with the learning, training, and confidence to set you and your organizations up for success—both this month and beyond. Now is the time to build a culture with a security-first mindset by making security part of your daily habits at work, home, and everywhere else. A security-first mindset means staying informed, proactively protecting digital assets, and encouraging others to do the same. Security is a team sport. By promoting vigilance and shared responsibility, we can create a safer world for all.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.


1April 2025 SFI progress report.

The post Cybersecurity Awareness Month: Security starts with you appeared first on The Microsoft Cloud Blog.

]]>
Unleashing the power of AI in India http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2025/02/06/unleashing-the-power-of-ai-in-india/ Thu, 06 Feb 2025 16:00:00 +0000 India has embraced the power of AI to reshape industries, drive innovation, and unlock new opportunities across the nation.

The post Unleashing the power of AI in India appeared first on The Microsoft Cloud Blog.

]]>
This blog is part of the AI worldwide tour series, which highlights customers from around the globe who are embracing AI to achieve more. Read about how customers are using responsible AI to drive social impact and business transformation with Global AI innovation.

It’s no secret that India is well-positioned to be a global leader in the AI era, having embraced the power of AI to reshape industries, drive innovation, and unlock new opportunities across the nation. Boasting a vast talent pool, proactive government initiatives, and a thriving startup ecosystem, India is uniquely equipped to leverage AI to solve pressing societal and business challenges and optimize operations across a wide array of civic and business verticals.

A long-standing partner in India’s technological growth, Microsoft has solidified its commitment with a US $3 billion investment to expand AI and Azure cloud infrastructure in the country. This initiative is designed to accelerate AI adoption across industries, empower businesses to integrate AI into critical processes, and nurture local talent to meet the evolving demands of the tech ecosystem. These efforts underscore Microsoft’s confidence in India’s position as a global leader in AI innovation and technological advancement.

AI business resources

Help your organization achieve its transformation goals

Local ingenuity was on full display during the Microsoft AI Tour stop in Bengaluru and New Delhi, where organizations showcased how they are leveraging AI to tackle complex challenges, streamline workflows, and drive transformative efficiencies across industries.

MakeMyTrip powers the future of travel with AI

MakeMyTrip (MMT), India’s leading online travel company, is at the forefront of enhancing the travel shopping experience with generative AI. Over its 24-year journey, MMT has served more than 77 million users, offering comprehensive travel booking services. A standout feature powered by generative AI is Myra, their conversational bot. MMT is integrating an AI-powered workflow within Myra to assist users seamlessly at every stage of their travel journey—from pre-trip planning to in-trip support and post-trip follow-up. Built using large language models (LLMs) and orchestrated via Microsoft Azure AI Foundry, these services ensure smooth assistance throughout the travel process. As one of the early adopters of generative AI in travel tech, MMT is leading the next generation of travel experiences.

Persistent Systems improves contract management with AI-powered agent

Persistent Systems, one of the world’s fastest-growing digital engineering and enterprise modernization service providers, faced recurring challenges surrounding their contract management: inefficient workflows and lengthy negotiation cycles were causing bottlenecks in an otherwise agile organization. Persistent turned to the power of generative AI and Microsoft’s technology stack to reimagine their approach to contract management, developing ContractAssIst, an AI-powered agent built using generative AI and Microsoft 365 Copilot, to transform collaboration and streamline internal contract negotiations. Built to help ensure security and access controls, the tool helps to enhance collaboration, streamline workflows, and accelerate decision-making. 

As a result, ContractAssIst has reduced emails during negotiations by 95% and cut navigation and negotiation time by 70%, a task that currently takes approximately 20 to 25 minutes. Persistent has deployed Microsoft 365 Copilot to nearly 2,000 users and plans to extend it to a broader audience.

LTIMindtree unlocks data management with Microsoft 365 Copilot

LTIMindtree, a global technology consulting and digital solutions company with more than 84,000 employees in more than 30 countries, is leveraging AI in innovative ways to drive digital transformation and enhance business and IT operations. They have demonstrated how Microsoft 365 Copilot technology and AI agents are transforming their critical business functions, such as pre-sales, resource management, and cyber security. For example, custom built AI agents assist the resource management teams to quickly find the right employees with relevant skills and match them to specific projects; and help pre-sales and account managers create high-quality responses using historical data to incoming requests for proposals (RFPs) and requests for information (RFIs). They are also using Microsoft Security Copilot to create a unified command center for investigations, threat intelligence, and incident response, empowering them to build a next-gen Security Operations Center (SOC). As a result, LTIMindtree has seen a 30% increase in overall employee efficiency, with 20% less time spent on emails and day-to-day task allocation.

Streamlining health claims with ICICI Lombard’s AI-powered solution

ICICI Lombard, a leading private insurer in India, has developed an innovative solution to streamline health claims processing. Traditionally, claim adjudicators manually filed claims, a time-consuming process involving the review of 20 pages of documents. Leveraging Microsoft Azure OpenAI Service, Azure AI Document Intelligence, and Azure AI Vision OCR service, ICICI Lombard’s new solution extracts relevant information from these documents, providing adjudicators with a consolidated view of the diagnosis and treatment. This innovation has reduced the time required to process claims by more than 50%.

eSanjeevani transforms healthcare access with innovative AI solutions

eSanjeevani, India’s National Telemedicine Service by the Ministry of Health and Family Welfare, has integrated AI-enabled tools to enhance care quality and streamline teleconsultations, promoting equitable access to healthcare across the country. Powered by Azure, it offers secure, scalable, and accessible doctor-to-doctor and doctor-to-patient teleconsultations. eSanjeevani is advancing its AI journey with Microsoft AI, enhancing productivity, data analysis, and user experience. These innovations are helping eSanjeevani set new benchmarks in telemedicine and digital healthcare services. It is also developing a proof of concept with Microsoft Copilot to transcribe doctor-patient conversations in real time for advanced speech analytics, aiding data-driven decisions. Serving more than 330 million patients, 98% from rural areas, eSanjeevani is today the world’s largest telemedicine initiative in primary healthcare.

AI for everyone in India

Satya Nadella speaking at the Microsoft AI Tour stop in India.
India AI Tour keynote with Satya Nadella, Chief Executive Officer.

India’s AI journey is not just about innovation, it’s about transformation across industries and lives. From travel to healthcare, banking to engineering, the case studies showcased here demonstrate the immense potential of AI when paired with the right tools, partnerships, and vision. Microsoft’s investments and technologies have enabled organizations in India to tackle challenges, streamline processes, and unlock new levels of efficiency and growth. As India continues to lead in the global AI revolution, these examples serve as a testament to how AI can create meaningful impact, fostering a future where innovation drives progress for everyone.

Find the resources to support your AI journey

The post Unleashing the power of AI in India appeared first on The Microsoft Cloud Blog.

]]>
Hear from Microsoft Security experts at these top cybersecurity events in 2025 http://approjects.co.za/?big=en-us/security/blog/2025/02/03/hear-from-microsoft-security-experts-at-these-top-cybersecurity-events-in-2025/ Mon, 03 Feb 2025 17:00:00 +0000 If you’re looking to boost your skills and stay ahead of the threat landscape, join Microsoft Security at the top cybersecurity events in 2025.

The post Hear from Microsoft Security experts at these top cybersecurity events in 2025 appeared first on The Microsoft Cloud Blog.

]]>
Inspiration can spark in an instant when you’re at a conference. Perhaps you discover a new tool during a keynote that could save you hours of time. Or maybe a peer shares a story over coffee that makes you rethink an approach. One conversation, one session, or one event could give you fresh ideas, renewed excitement, and a vision for what to do next.

In the current AI landscape, inspiration and information are more important than ever for security professionals to stay ahead of threat actors. So if you’re looking to boost your skills and stay ahead of the threat landscape, join Microsoft Security at the top cybersecurity events in 2025.

Whether you join us at an industry staple like RSAC or one of our own events like Microsoft Secure, you can benefit in several key ways:

  • Get insights and strategies needed to overcome obstacles and drive your security initiatives forward with confidence.
  • See live demos of the latest products, product features, skills, and tools you can use in your work. Be among the first to hear about Microsoft Security innovations, such as Microsoft’s Secure Future Initiative and XSPA (cross-site port attack) updates attendees of Microsoft Ignite 2024 heard.
  • Learn from Microsoft Security experts on global threat intelligence.
  • Network with other like-minded security pros, learn best practices from your peers, and meet one-on-one with our experts.

Whatever your role, there’s an event for you and a path to successfully safeguarding your organization.

A group of men standing around a table with laptops

Microsoft at RSAC

From our signature Pre-Day to hands-on demos and one-on-one meetings, discover how Microsoft Security can give you the advantage you need in the era of AI.

Register now 

Conferences to inspire and engage everyone

Large crowd of people attending Microsoft Ignite in Chicago, November 2024.

Security professionals of all levels can benefit from attending one of the biggest cybersecurity events, including RSAC, Black Hat, plus two premier Microsoft events—Microsoft Secure (virtual) and Microsoft Ignite (in-person and virtual). If you love being the first to hear about Microsoft product innovations, don’t miss these Microsoft events with insights every security professional can put to good use.

Microsoft Secure

Date: April 9, 2025
Location: Online only

Microsoft Secure is Microsoft’s cybersecurity conference. This year’s one-hour digital showcase will spotlight AI-first, end-to-end security innovations with clear use cases and customer stories of how they use our tools daily. Attendees will deep-dive into cybersecurity products and strategies along with thousands of other cybersecurity professionals.

RSAC

Dates: April 27-May 1, 2025
Location: San Francisco, CA

RSAC 2025 is a can’t-miss security conference, bringing together more than 40,000 security professionals to discuss the latest cybersecurity challenges and innovation with the best of the best. With the theme of “Many Voices. One Community,” RSAC will feature keynotes, track sessions, interactive sessions, networking opportunities, and an expo designed to foster advanced security strategies.

Throughout RSAC, Microsoft Security will showcase end-to-end security innovations and share world class threat and regulatory intelligence to give you the advantage you need in the era of AI. From our signature Pre-Day to hands-on demos and one-on-one meetings, discover how Microsoft Security can give you the advantage you need in the era of AI.​ Check out the full Microsoft at RSAC experience.

Learn more about the Microsoft Events at RSA Conference 2025

Black Hat

Dates: August 2-7, 2025
Location: Las Vegas, NV

The Black Hat Conference is a premier learning event in the cybersecurity industry, known for its in-depth technical sessions and cutting-edge research presentations on topics like critical infrastructure and information security research news.

Microsoft is a key sponsor of the conference each year, where we showcase our latest discoveries and AI research on real-world problems and solutions. Last year, our AI Red Teaming in Practice training sessions and our AI Summit roundtables were a hit. Black Hat is also known for its security community celebrations, including the Cybersecurity Woman of the Year Awards and the Researcher celebrations, which we take part in every year.

Learn more about the Black Hat Conference 2025

Microsoft Ignite

Dates: November 17-21, 2025
Location: San Francisco, CA, and online

Microsoft Ignite is Microsoft’s biggest annual conference for developers, IT professionals, business leaders, security professionals, and partners. Thousands of security professionals like you attend every year to hear the biggest security product announcements from Microsoft Security and gain training and skilling to prepare for future advancements in AI. Security professionals of all levels can join interactive labs, workshops, keynotes, technical breakout sessions, demos, and more, led by Microsoft Security leaders and experts.

Over the past few years, we’ve really boosted Microsoft Security experiences at Microsoft Ignite. Last year, we hosted the Microsoft Ignite Security Forum for security leaders and two workshops on AI red teaming and Microsoft 365 Copilot deployment. Plus, we hosted more than 30 sessions demoing new features to help you secure your environment, use your favorite Microsoft tools safely and securely, and make sure your organizational processes prioritize security first.

If you attend Microsoft Ignite in person this year, you won’t want to miss our Security Leaders Dinner or the security community party. If you’re not able to attend in person, you can register for our virtual event.​ Sign up to learn more.

Learn more about Microsoft Ignite 2025

Events for security leaders and decision-makers

A woman presenting during the Microsoft AI Tour.

Microsoft AI Tour

Dates: Through May 30, 2025
Location: Multiple worldwide

The Microsoft AI Tour is a free, one-day event for executives that explores the ways AI can drive growth and create lasting value in multiple cities around the globe. Whether you’re a functional decision-maker who evaluates investments, an IT team member charged with security, or a CISO revamping your security strategy, there will be valuable security content tailored to your needs.

Microsoft Security’s top business leaders attend AI tour locations worldwide to share with you how Microsoft Security Copilot lets you protect at the speed and scale of AI. They are also available to meet with you.

Reserve your spot at an event near you

Event locationEvent date
Dubai, United Arab EmiratesFebruary 6, 2025
Singapore, Southeast AsiaFebruary 19, 2025
Tokyo, JapanFebruary 26-27, 2025
London, United KingdomMarch 5, 2025
Brussels, BelgiumMarch 25, 2025
Seoul, South KoreaMarch 26, 2025
Paris, FranceMarch 26, 2025
Madrid, SpainMarch 27, 2025
Tokyo, JapanMarch 27, 2025
Beijing, ChinaApril 23, 2025
Athens, GreeceMay 27-30, 2025

Gartner Security and Risk Management Summit

Dates: June 9-11, 2025
Location: National Harbor, MD

The Gartner Security and Risk Management Summit (Gartner SRM) explores trends in cybersecurity risk management, including the integration of generative AI, being an effective CISO, the importance of balancing response and recovery efforts with prevention, combating misinformation, and closing the cybersecurity skills gap to build a resilient workforce.

Microsoft Security executives host sessions at Gartner SRM to help you ensure the security of AI systems and adopt AI to drive innovation and efficiency. Our most popular topics center around securing and governing AI.

Learn more about the Gartner Security and Risk Management Summit

Events for technical and security practitioners

People attending the Microsoft booth at RSAC 2024.

Security teams look for conferences that provide specialized knowledge on the industry in which they work or on a narrow cybersecurity topic.

Legalweek

Dates: March 24-27, 2025
Location: New York, NY

Legalweek is a weeklong conference where approximately 6,000 members of the legal community will gather to network with their peers, explore emerging trends, spotlight the latest tech, and offer a roadmap through industry shifts. Topics explored at past Legalweek conferences include the ethical and regulatory impact of using your data to train AI, litigation in the age of cybersecurity, and maximizing efficiency and legal automation.  

This year, we’ll be sponsoring three sessions on AI and one on collaboration in complex litigation. As in years past, Microsoft is hosting an Executive Breakfast at Legalweek from 7:30 AM ET-8:45 AM ET on Tuesday, March 25, 2025. RSVP today and stop by Booth #3103 in New York Hilton Midtown Americas Hall 2 to learn more about the latest Microsoft Purview innovations. If you’d like to meet with our team while at Legalweek, sign up for a one-on-one meeting.

Learn more about Legalweek 2025

Identiverse

Dates: June 3-6, 2025
Location: Las Vegas, NV

Limiting access to AI, apps, and resources to those with the proper permissions is a crucial part of security. The Identiverse conference provides education, collaboration, and insight into the future of identity security. More than 2,500 attendees will share insights, develop new ideas, and advance the state of modern digital identity and security.

The event features sessions on best practices, industry trends, and latest technologies; an exhibition hall to showcase the latest identity solution innovations; and networking opportunities. Microsoft will host a booth where attendees can connect with Microsoft Security experts and leaders.

Learn more about Identiverse 2025

Events for developers

The cybersecurity talent shortage is requiring many to step up even if cybersecurity isn’t in their official job description. If you are an IT professional being tasked with cybersecurity or someone with an eagerness to learn cybersecurity tactics, join our Microsoft events aimed at helping you uplevel your cybersecurity skills.

Microsoft Build

Dates: May 19-22, 2025
Location: Seattle, WA

Security is a team sport and developers are increasingly the first string team members who build security into the development of applications. Microsoft Build Conference 2025 is Microsoft’s developer-focused event. It will showcase exciting updates and innovations from Microsoft Security for developers to create AI-enabled security solutions for their organizations.

The event includes connection opportunities, demos, and security-focused sessions. Past topics have included using AI to accelerate development processes, tools for enhancing the developer experience, and strategies for building in the cloud. Stay up to date on Microsoft Build news and find out when registration is open.

Learn more about the Microsoft Build Conference 2025

Find your inspiration at an event this year

Cybersecurity events foster a culture of continuous learning and adaptation, empowering you to stay ahead of emerging cyberthreats and maintain a resilient security posture. The ideas will flow freely at these events. Whether you attend one of the biggest conferences of the year or a smaller event (or both), you’ll be in good company. Microsoft Security will be there be, too, excited to share and eager to learn.

Hope to see you at a future event!

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.

The post Hear from Microsoft Security experts at these top cybersecurity events in 2025 appeared first on The Microsoft Cloud Blog.

]]>
Making it easier for companies to build and ship AI people can trust https://news.microsoft.com/source/features/ai/making-it-easier-for-companies-to-build-and-ship-ai-people-can-trust/ Wed, 22 Jan 2025 16:00:00 +0000 Generative AI is transforming many industries, but businesses often struggle with how to create and deploy safe and secure AI tools as technology evolves.

The post Making it easier for companies to build and ship AI people can trust appeared first on The Microsoft Cloud Blog.

]]>
Generative AI is transforming many industries, but businesses often struggle with how to create and deploy safe and secure AI tools as technology evolves. Leaders worry about the risk of AI generating incorrect or harmful information, leaking sensitive data, being hijacked by attackers or violating privacy laws — and they’re sometimes ill-equipped to handle the risks.  

“Organizations care about safety and security along with quality and performance of their AI applications,” says Sarah Bird, chief product officer of Responsible AI at Microsoft. “But many of them don’t understand what they need to do to make their AI trustworthy, or they don’t have the tools to do it.”  

To bridge the gap, Microsoft provides tools and services that help developers build and ship trustworthy AI systems, or AI built with security, safety and privacy in mind. The tools have helped many organizations launch technologies in complex and heavily regulated environments, from an AI assistant that summarizes patient medical records to an AI chatbot that gives customers tax guidance.  

The approach is also helping developers work more efficiently, says Mehrnoosh Sameki, a Responsible AI principal product manager at Microsoft. 

This post is part of Microsoft’s Building AI Responsibly series, which explores top concerns with deploying AI and how the company is addressing them with its responsible AI practices and tools.

“It’s very easy to get to the first version of a generative AI application, but people slow down drastically before it goes live because they’re scared it might expose them to risk, or they don’t know if they’re complying with regulations and requirements,” she says. “These tools expedite deployment and give peace of mind as you go through testing and safeguarding your application.”  

The tools are part of a holistic method that Microsoft provides for building AI responsibly, honed by expertise in identifying, measuring, managing and monitoring risk in its own products — and making sure each step is done. When generative AI first emerged, the company assembled experts in security, safety, fairness and other areas to identify foundational risks and share documentation, something it still does today as technology changes. It then developed a thorough approach for mitigating risk and tools for putting it into practice.  

The approach reflects the work of an AI Red Team that identifies emerging risks like hallucinations and prompt attacks, researchers who study deepfakesmeasurement experts who developed a system for evaluating AI, and engineers who build and refine safety guardrails. Tools include the open source framework PyRIT for red teams to identify risks, automated evaluations in Azure AI Foundry for continuously measuring and monitoring risks, and Azure AI Content Safety for detecting and blocking harmful inputs and outputs.  

Microsoft also publishes best practices for choosing the right model for an application, writing system messages and designing user experiences as part of building a robust AI safety system.  

“We use a defense-in-depth approach with many layers protecting against different types of risks, and we’re giving people all the pieces to do this work themselves,” Bird says. 

For the tax-preparation company that built a guidance chatbot, the capability to correct AI hallucinations was particularly important for providing accurate information, says Sameki. The company also made its chatbot more secure, safe and private with filters that block prompt attacks, harmful content and personally identifiable information.  

Making our own AI systems trustworthy is foundational in what we do, and we want to empower customers to do the same.

Sarah Bird, chief product officer of Responsible AI

She says the health care organization that created the summarization assistant was especially interested in tools for improving accuracy and creating a custom filter to make sure the summaries didn’t omit key information.  

“A lot of our tools help as debugging tools so they could understand how to improve their application,” Sameki says. “Both companies were able to deploy faster and with a lot more confidence.”  

Microsoft is also helping organizations improve their AI governance, a system of tracking and sharing important details about the development, deployment and operation of an application or model. Available in private preview in Azure AI Foundry, AI reports will give organizations a unified platform for collaborating, complying with a growing number of AI regulations and documenting evaluation insights, potential risks and mitigations.

“It’s hard to know that all the pieces are working if you don’t have the right governance in place,” says Bird. “We’re making sure that Microsoft’s AI systems are compliant, and we’re sharing best practices, tools and technologies that help customers with their compliance journey.”  

The work is part of Microsoft’s goal to help people do more with AI and share learnings that make the work easier for everyone.  

“Making our own AI systems trustworthy is foundational in what we do, and we want to empower customers to do the same,” Bird says. 

Learn more about Microsoft’s Responsible AI work.

Lead illustration by Makeshift Studios / Rocio Galarza. Story published on January 22, 2025

The post Making it easier for companies to build and ship AI people can trust appeared first on The Microsoft Cloud Blog.

]]>
Enhancing AI safety: Insights and lessons from red teaming http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2025/01/14/enhancing-ai-safety-insights-and-lessons-from-red-teaming/ Tue, 14 Jan 2025 16:00:00 +0000 Drawing from our experience, we’ve identified eight main lessons that can help business leaders align AI red teaming efforts with real-world risks.

The post Enhancing AI safety: Insights and lessons from red teaming appeared first on The Microsoft Cloud Blog.

]]>
In an age where generative AI is transforming industries and reshaping daily interactions, helping ensure the safety and security of this technology is paramount. As AI systems grow in complexity and capability, red teaming has emerged as a central practice for identifying risks posed by these systems. At Microsoft, the AI red team (AIRT) has been at the forefront of this practice, red teaming more than 100 generative AI products since 2018. Along the way, we’ve gained critical insights into how to conduct red teaming operations, which we recently shared in our whitepaper, “Lessons From Red Teaming 100 Generative AI Products.”

This blog outlines the key lessons from the whitepaper, practical tips for AI red teaming, and how these efforts improve the safety and reliability of AI applications like Microsoft Copilot.

What is AI red teaming?

AI red teaming is the practice of probing AI systems for security vulnerabilities and safety risks that could cause harm to users. Unlike traditional safety benchmarking, red teaming focuses on probing end-to-end systems—not just individual models—for weaknesses. This holistic approach allows organizations to address risks that emerge from the interactions among AI models, user inputs, and external systems.

8 lessons from the front lines of AI red teaming

Drawing from our experience, we’ve identified eight main lessons that can help business leaders align AI red teaming efforts with real-world risks.

1. Understand system capabilities and applications

AI red teaming should start by understanding how an AI system could be misused or cause harm in real-world scenarios. This means focusing on the system’s capabilities and where it could be applied, as different systems have different vulnerabilities based on their design and use cases. By identifying potential risks up front, red teams can prioritize testing efforts to uncover the most relevant and impactful weaknesses.

Example: Large language models (LLMs) are prone to generating ungrounded content, often referred to as “hallucinations.” However, the impact created by this weakness varies significantly depending on the application. For example, the same LLM could be used as a creative writing assistant and to summarize patient records in a healthcare context.

2. Complex attacks aren’t always necessary

Attackers often use simple and practical methods, like hand crafting prompts and fuzzing, to exploit weaknesses in AI systems. In our experience, relatively simple attacks that target weaknesses in end-to end systems are more likely to be successful than complex algorithms that target only the underlying AI model. AI red teams should adopt a system-wide perspective to better reflect real-world threats and uncover meaningful risks.

Example: Overlaying text on an image to trick an AI model into generating content that could aid in illegal activities.

Example of how overlaying text on an image can trick an AI model intro generating content that could aid in illegal activities—in this scenario, providing information on how to commit identity theft.
Figure 1. Example of an image jailbreak to generate content that could aid in illegal activities.

3. AI red teaming is not safety benchmarking

The risks posed by AI systems are constantly evolving, with new attack vectors and harms emerging as the technology advances. Existing safety benchmarks often fail to capture these novel risks, so red teams must define new categories of harm and consider how they can manifest in real-world applications. In doing so, AI red teams can identify risks that might otherwise be overlooked.

Example: Assessing how a state-of-the-art large language model (LLM) could be used to automate scams and persuade people to engage in risky behaviors.

4. Leverage automation for scale

Automation plays a critical role in scaling AI red teaming efforts by enabling faster and more comprehensive testing of vulnerabilities. For example, automated tools (which may, themselves, be powered by AI) can simulate sophisticated attacks and analyze AI system responses, significantly extending the reach of AI red teams. This shift from fully manual probing to red teaming supported by automation allows organizations to address a much broader range of risks.

What is PyRIT?

Learn more ↗

Example: Microsoft AIRT’s Python Risk Identification Tool (PyRIT) for generative AI, an open-source framework, can automatically orchestrate attacks and evaluate AI responses, reducing manual effort and increasing efficiency.

5. The human element remains crucial

Despite the benefits of automation, human judgment remains essential for many aspects of AI red teaming including prioritizing risks, designing system-level attacks, and assessing nuanced harms. In addition, many risks require subject matter expertise, cultural understanding, and emotional intelligence to evaluate, underscoring the need for balanced collaboration between tools and people in AI red teaming.

Example: Human expertise is vital for evaluating AI-generated content in specialized domains like CBRN (chemical, biological, radiological, and nuclear), testing low-resource languages with cultural nuance, and assessing the psychological impact of human-AI interactions.

6. Responsible AI risks are pervasive but complex

Harms like bias, toxicity, and the generation of illegal content are more subjective and harder to measure than traditional security risks, requiring red teams to be on guard against both intentional misuse and accidental harm caused by benign users. By combining automated tools with human oversight, red teams can better identify and address these nuanced risks in real-world applications.

Example: A text-to-image model that reinforces stereotypical gender roles, such as depicting only women as secretaries and men as bosses, based on neutral prompts.

This series of four images shows how a neutral text prompt inputted into in a text-to-image generator could result in an image that reinforces stereotypical gender roles.
Figure 2. Four images generated by a text-to-image model given the prompt “Secretary talking to boss in a conference room, secretary is standing while boss is sitting.”

7. LLMs amplify existing security risks and introduce new ones

Most AI red teams are familiar with attacks that target vulnerabilities introduced by AI models, such as prompt injections and jailbreaks. However, it is equally important to consider existing security risks and how these can manifest in AI systems including outdated dependencies, improper error handling, lack of input sanitization, and many other well-known vulnerabilities.

Example: Attackers exploiting a server-side request forgery (SSRF) vulnerability introduced by an outdated FFmpeg version in a video-processing generative AI application.

This illustration shows the step-by-step actions of a SSRF vulnerability in a generational AI video service and how an outdated FFmpeg version can make the service vulnerable to attack.
Figure 3. Illustration of the SSRF vulnerability in the generative AI application.

8. The work of securing AI systems will never be complete

AI safety is not just a technical problem; it requires robust testing, ongoing updates, and strong regulations to deter attacks and strengthen defenses. While no system can be entirely risk-free, combining technical advancements with policy and regulatory measures can significantly reduce vulnerabilities and increase the cost of attacks.

Example: Iterative “break-fix” cycles, which perform multiple rounds of red teaming and mitigation to ensure that defenses evolve alongside emerging threats.

The road ahead: Challenges and opportunities of AI red teaming

AI red teaming is still a nascent field with significant room for growth. Some pressing questions remain:

implement generative AI across the organization

Explore how ›

  • How can red teaming practices evolve to probe for dangerous capabilities in AI models like persuasion, deception, and self-replication?
  • How do we adapt red teaming practices to different cultural and linguistic contexts as AI systems are deployed globally?
  • What standards can be established to make red teaming findings more transparent and actionable?

Addressing these challenges will require collaboration across disciplines, organizations, and cultural boundaries. Open-source tools like PyRIT are a step in the right direction, enabling wider access to AI red teaming techniques and fostering a community-driven approach to AI safety.

Next steps: Building a safer AI future with AI red teaming

AI red teaming is essential for helping ensure safer, more secure, and responsible generative AI systems. As adoption grows, organizations must embrace proactive risk assessments grounded in real-world threats. By applying key lessons—like balancing automation with human oversight, addressing responsible AI harms, and prioritizing ethical considerations—red teaming helps build systems that are not only resilient but also aligned with societal values.

AI safety is an ongoing journey, but with collaboration and innovation, we can meet the challenges ahead. Dive deeper into these insights and strategies by reading the full whitepaper: Lessons From Red Teaming 100 Generative AI Products.

The post Enhancing AI safety: Insights and lessons from red teaming appeared first on The Microsoft Cloud Blog.

]]>
More value, less risk: How to implement generative AI across the organization securely and responsibly http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2024/11/04/more-value-less-risk-how-to-implement-generative-ai-across-the-organization-securely-and-responsibly/ Mon, 04 Nov 2024 16:00:00 +0000 http://approjects.co.za/?big=en-us/innovation/blog/2024/11/04/more-value-less-risk-how-to-implement-generative-ai-across-the-organization-securely-and-responsibly/ The technology landscape is undergoing a massive transformation, and AI is at the center of this change.

The post More value, less risk: How to implement generative AI across the organization securely and responsibly appeared first on The Microsoft Cloud Blog.

]]>
The technology landscape is undergoing a massive transformation, and AI is at the center of this change—posing both new opportunities as well as new threats.  While AI can be used by adversaries to execute malicious activities, it also has the potential to be a game changer for organizations to help defeat cyberattacks at machine speed. Already today generative AI stands out as a transformative technology that can help boost innovation and efficiency. To maximize the advantages of generative AI, we need to strike a balance between addressing the potential risks and embracing innovation. In our recent strategy paper, “Minimize Risk and Reap the Benefits of AI,” we provide a comprehensive guide to navigating the challenges and opportunities of using generative AI.

Minimize Risk and Reap the Benefits of AI

Addressing security concerns and implementing safeguards

According to a recent survey conducted by ISMG, the top concerns for both business executives and security leaders on using generative AI in their organization range, from data security and governance, transparency and accountability to regulatory compliance.1 In this paper, the first in a series on AI compliance, governance, and safety from the Microsoft Security team, we provide business and technical leaders with an overview of potential security risks when deploying generative AI, along with insights into recommended safeguards and approaches to adopt the technology responsibly and effectively.

Learn how to deploy generative AI securely and responsibly

In the paper, we explore five critical areas to help ensure the responsible and effective deployment of generative AI: data security, managing hallucinations and overreliance, addressing biases, legal and regulatory compliance, and defending against threat actors. Each section provides essential insights and practical strategies for navigating these challenges. 

An infographic displaying the top 5 security and business leader concerns: data security, hallucinations, threat actors, biases, and legal and regulatory

Data security

build a foundation for AI success

Explore governance ›

Data security is a top concern for business and cybersecurity leaders. Specific worries include data leakage, over-permissioned data, and improper internal sharing. Traditional methods like applying data permissions and lifecycle management can enhance security. 

Managing hallucinations and overreliance

Generative AI hallucinations can lead to inaccurate data and flawed decisions. We explore techniques to help ensure AI output accuracy and minimize overreliance risks, including grounding data on trusted sources and using AI red teaming. 

Defending against threat actors

Threat actors use AI for cyberattacks, making safeguards essential. We cover protecting against malicious model instructions, AI system jailbreaks, and AI-driven attacks, emphasizing authentication measures and insider risk programs. 

Grow Your Business with AI You Can Trust

Addressing biases

Reducing bias is crucial to help ensure fair AI use. We discuss methods to identify and mitigate biases from training data and generative systems, emphasizing the role of ethics committees and diversity practices.

Microsoft’s journey to redefine legal support with AI

All in on AI ›

Navigating AI regulations is challenging due to unclear guidelines and global disparities. We offer best practices for aligning AI initiatives with legal and ethical standards, including establishing ethics committees and leveraging frameworks like the NIST AI Risk Management Framework.

Explore concrete actions for the future

Explore security innovations

Microsoft at RSAC 2025 ↗

As your organization adopts generative AI, it’s critical to implement responsible AI principles—including fairness, reliability, safety, privacy, inclusiveness, transparency, and accountability. In this paper, we provide an effective approach that uses the “map, measure, and manage” framework as a guide; as well as explore the importance of experimentation, efficiency, and continuous improvement in your AI deployment.

I’m excited to launch this series on AI compliance, governance, and safety with a strategy paper on minimizing risk and enabling your organization to reap the benefits of generative AI. We hope this series serves as a guide to unlock the full potential of generative AI while ensuring security, compliance, and ethical use—and trust the guidance will empower your organization with the knowledge and tools needed to thrive in this new era for business.

Additional resources

Get more insights from Bret Arsenault on emerging security challenges from his Microsoft Security blogs covering topics like next generation built-in security, insider risk management, managing hybrid work, and more.


1, 2 ISMG’s First annual generative AI study – Business rewards vs. security risks: Research report, ISMG.

The post More value, less risk: How to implement generative AI across the organization securely and responsibly appeared first on The Microsoft Cloud Blog.

]]>
AI safety first: Protecting your business and empowering your people http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2024/10/31/ai-safety-first-protecting-your-business-and-empowering-your-people/ Thu, 31 Oct 2024 15:00:00 +0000 Microsoft has created some resources like the Be Cybersmart Kit to help organizations learn how to protect themselves.

The post AI safety first: Protecting your business and empowering your people appeared first on The Microsoft Cloud Blog.

]]>

Every technology can be used for good or bad. This was as true for fire and for writing as it is for search engines and for social networks, and it is very much true for AI. You can probably think of many ways that these latter two have helped and harmed in your own life—and you can probably think of the ways they’ve harmed more easily, because those stick out in our minds, while the countless ways they helped (finding your doctor, navigating to their office, the friends you made, the jobs you got) fade into the background of life. You’re not wrong to think this: when a technology is new it’s unfamiliar, and every aspect of it attracts our attention—how often do you get astounded by the existence of writing nowadays?—and when it doesn’t work, or gets misused, it attracts our attention a lot.

The job of the people who build technologies is to make them as good as possible at helping, and as bad as possible at harming. That’s what my job is: as CVP and Deputy CISO of AI Safety and Security at Microsoft, I have the rare privilege of leading a team whose job is to look at every aspect of every AI system we build, and figure out ways to make them safer and more effective. We use the word “safety” very intentionally, because our work isn’t just about security, or privacy, or abuse; our scope is simply “if it involves AI, and someone or something could get hurt.”

But the thing about tools is that no matter how safe you make them, they can go wrong and they can be misused, and if AI is going to be a major part of our lives—which it almost certainly is—then we all need to learn how to understand it, how to think about it, and how to keep ourselves safe both with and from it. So as part of Cybersecurity Awareness Month, we’ve created some resources like the Be Cybersmart Kit to help individuals and organizations learn about some of the most important risks and how to protect themselves.

Cybersecurity awareness

Explore cybersecurity awareness resources and training

I’d like to focus on the three risks that are most likely to affect you directly as individuals and organizations in the near future: overreliance, deepfakes, and manipulation. The most important lesson is that AI safety is about a lot more than how it’s built—it’s about the ways we use it.

Overreliance on AI

Microsoft at RSAC 2025

Explore Security innovations ↗

Because my job has “security” in the title, when people ask me about the number one risk from AI they often expect me to talk about sophisticated cyberattacks. But the reality is that the number one way in which people get hurt by AI is by not knowing when (not) to trust it. If you were around in the late 1990s or early 2000s, you might remember a similar problem with search engines: people were worried that if people saw something on the Internet, all nicely written and formatted, they would assume whatever they read was true—and unfortunately, this worry was well-founded. This might seem ridiculous to us with twenty years of additional experience with the Internet; didn’t people know that the Internet was written by people? Had they ever met people? But at the time, very few people ever encountered professionally-formatted text with clean layouts that wasn’t the result of a lengthy editorial process; our instincts for what “looked reputable” were wrong. Today’s AI has a similar concern because it communicates with you, and we aren’t used to things that speak to us in natural language not understanding basic things about our lives.

We call this problem “overreliance,” and it comes in four basic shapes:

  • Naive overreliance happens when users simply don’t realize that just because responses from AI sound intelligent and well-reasoned, that doesn’t mean the responses actually are smart. They treat the AI like an expert instead of like a helpful, but sometimes naive, assistant.
  • Rushed overreliance happens when people know they need to check, but they just don’t have time to—maybe they’re in a fast-paced environment, or they have too many things to check one by one, or they’ve just gotten used to clicking “accept.”
  • Forced overreliance is what happens when users can’t check, even if they want to; think of an AI helping a non-programmer write a complex website (are you going to check the code for bugs?) or vision augmentation for the blind.
  • Motivated overreliance is maybe the sneakiest: it happens when users have an answer they want to get, and keep asking around (or rephrasing the question, or looking at different information) until they get it.

In each case, the problem with overreliance is that it undermines the human role in oversight, validation, and judgment, which is crucial in preventing AI mistakes from leading to negative outcomes.

How to stay safe

The most important thing you can do to protect yourself is to understand that AI systems aren’t the infallible computers of science fiction. The best way to think of them is as earnest, smart, junior colleagues—excited to help and sometimes really smart but sometimes also really dumb. In fact, this rule applies to a lot more than just overreliance: we’ve found that asking “how would I make this safe if it were a person instead of an AI?” is one of the most reliable ways to secure an AI system against a huge range of risks.

  1. Treat AI as a tool, not a decision-maker: Always verify the AI’s output, especially in critical areas. You wouldn’t hand a key task to a new hire and assume what they did is perfect; treat AI the same way. Whether it’s generating code or producing a report, review it carefully before relying on it.
  2. Maintain human oversight: Think of this as building a business process. If you’re going to be using an AI to help make decisions, who is going to cross-check that? Will someone be overseeing the results for compliance, maybe, or doing a final editorial pass? This is especially true in high-stakes or regulated environments where errors could have serious consequences.
  3. Use AI for brainstorming: AI is at its best when you ask it to lean into its creativity. It’s especially good at helping come up with ideas and interactively brainstorming. Don’t ask AI to do the job for you; ask AI to come up with an idea for your next step, think about it and maybe tweak it a bit, then ask it about its thoughts for what to do next. This way its creativity is boosting yours, while your eye is still on whether the result is what you want.

Train your team to know that AI can make mistakes. When people understand AI’s limitations, they’re less likely to trust it blindly.

Impersonation using AI

Fighting deepfakes with more transparency

Read more ↗

Deepfakes are highly realistic images, recordings, and videos created by AI. They’re called “fakes” when they’re used for deceptive purposes—and both this threat and the next one are about deception. Impersonation is when someone uses a deepfake to convince you that you’re talking to someone that you aren’t. This threat can have serious implications for businesses, as bad actors can use deepfake technology to deceive others into making decisions based on fraudulent information.

Imagine someone creates a deepfake of your chief finance officer’s voice and uses it to convince an employee to authorize a fraudulent transfer. This isn’t hypothetical—it already happened. A company in Hong Kong was taken for $25.6 million with the use of this exact technique.1

The real danger lies in how convincingly these AI-generated voices and videos can mimic trusted individuals, making it hard to know who you’re talking to. Traditional methods of identifying people—like hearing their voice on the phone or seeing them on a video call—are no longer reliable.

How to stay safe

As deepfakes become more compelling, the best defense is to communicate with people in ways where recognizing their face or voice isn’t the only thing you’re relying on. That means using authenticated communication channels like Microsoft Teams or email rather than phone calls or SMS, which are trivial to fake. Within those channels, you need to check that you’re talking to the person you think you’re talking to, and that software (if built right) can help you do that.

In the Hong Kong example above, the bad actor sent an email from a fake but realistic-looking email address inviting the victim to a Zoom meeting on an attacker-controlled but realistically-named server, where they had a conversation with “coworkers” who were actually all deepfakes. Email services such as Outlook can prevent situations like this by vividly highlighting that this is a message from an unfamiliar email address and one that isn’t part of your company; enterprise video conferencing (VC) systems like Teams can identify that you’re connecting to a system outside your own company as a guest. Use tools that provide indicators like these and pay attention to them.

If you find that you need to talk over an unauthenticated channel—say, you get a phone call from a family member in a bad situation and desperately needing you to send them money, or you get a WhatsApp message from an unfamiliar number—consider pre-arranging some secret code words with people you know so you can identify that they’re really who they say they are.

All of these are examples of a familiar technique that we use in security called multi-factor authentication (MFA), which is about using multiple means to verify someone is who they say they are. If you communicate over an authenticated channel, an attacker has to both compromise an account on your service (which itself should be protected by multiple factors) and create a convincing deepfake of that particular person. Forcing attackers to simultaneously do multiple different attacks against the same target at once makes the job exponentially harder for them. Most important services you use (email, social networks, and so on) allow you to set up MFA, and you should always do this when you can—preferably using “strong” MFA methods like physical keys or mobile apps, rather than weak methods like SMS, which are easily faked. According to our latest Microsoft Digital Defense Report, implementing modern day MFA reduces the likelihood of account compromise by 99.2%, significantly strengthening security and making unauthorized access more difficult for attackers to gain access. Although MFA techniques reduce the risk of identity compromise, many organization have been slow to adopt them. So, in January 2020, Microsoft introduced “security defaults” that turn on MFA while turning off basic and legacy authentication for new tenants and those with simple environments. The impact is clear: tenants that use security defaults experience 80% fewer compromises than tenants that don’t.

Scams, phishing, and social manipulation

What is phishing?

Learn more ↗

Beyond impersonating someone you know, AI can be used to power a whole range of attacks against people. The most expensive part of running a scam is taking the victim from the moment they first pick up the bait—answering an email message, perhaps—to the moment the scammers get what they want, be it your password or your money. Phishing campaigns often require work to create cloned websites to steal your credentials. Spear-phishing requires crafting a targeted set of lures for each potential victim. All of these are things that bad actors can do much more quickly and easily with AI tools to help them; they are, after all, the same tools that good actors use to automate customer service, website building, or document creation.

On top of scams, an increasingly important use of AI is in social manipulation, especially by actors with political goals—whether they be real advocacy organizations or foreign intelligence services. Since the mid-2010s, a key goal of many governments has been to sow confusion in the information world in order to sway political outcomes. This can include:

  • Convincing you that something is true when it isn’t—maybe that some kind of crime is rampant and you need to be protected from it, or that your political enemies have been doing something awful.
  • Convincing you that something isn’t true when it is—maybe that the bad things they were caught doing are actually deepfakes and frauds.
  • Simply convincing you that you can’t know what’s true, and you can’t do anything about it anyway, so you should just give up and stay home and not try to affect things.

There are a lot of tricks to doing this, but the most important ones are to make it feel like “everybody feels” something (by making sure you see just enough comments saying something that you figure it must be right, and you start repeating them, making other people believe it even more) and by telling you what you want to hear—creating false stories that line up with what you’re already expecting to believe. (Remember motivated overreliance? This is the same thing!)

AI is supercharging this space as well; it used to be that if you wanted to make sure that every hot conversation about a subject had people voicing your opinion, you needed either very non-human-sounding scripts, or you needed to hire a room full of operators. Today, all you need is a computer.

You can learn more about these attacks in on our threat intelligence website called Microsoft Security Insider.

How to stay safe

Take your current habits for being aware of potential scams or phishing attempts, and turn them up a notch. Just because something showed up at the top of search results doesn’t mean it’s legitimate. Look at things like URLs and source email addresses carefully, and see if you’re looking at something genuine or not.

To detect sophisticated phishing attempts, always verify both the source and the information with trusted channels. Cybercriminals often create a false sense of urgency, use amplification tactics, and mimic trustworthy sources to make their emails or content appear legitimate. Stay especially cautious when approached by unfamiliar individuals online, as most fraud or influence operations begin with a simple social media reply or a seemingly innocent “wrong number” message. (More sophisticated attacks will send friend requests to people, and once you get one person to say yes, your further requests to their friends will look more legitimate, since they now have mutual “friends” with the attacker.)

Social manipulation can affect you both directly (you see messages created by a threat actor) or indirectly (your friends saw those messages and unwittingly repeated them). This means that just because you hear something from someone you trust, you can’t be sure they didn’t get fooled too. If you’re forming your opinion about something, or if you need to make an important decision about whether you believe something or not, do some research, and figure out where a story came from. (And don’t forget that “they won’t tell you about this!” is a common thing to add to frauds, just to make you believe that the lack of news coverage makes it more true.)

But on the other hand, don’t refuse to believe anything you hear, because making you not believe true things is another way you can be cheated. Too much skepticism can get you in just as much trouble as not enough.

And ultimately, remember—social media and similar fora are designed to get you more engaged, activated, and excited, and when you’re in that state, you’re more likely to amplify any feelings you encounter. Often the best thing you can do is simply disconnect for a while and take a breather.

The power and limitations of AI

While AI is a powerful tool, its safety and effectiveness rely on more than just the technology itself. AI functions as one part of a larger, interconnected system that includes human oversight, business processes, and societal context. Navigating the risks—whether it’s overreliance, impersonation, cyberattacks, and social manipulation—requires not only understanding AI’s role but also the actions people must take to stay safe. As AI continues to evolve, staying safe means remaining active participants—adapting, learning, and taking intentional steps to protect both the technology and ourselves. We encourage you to use the resources on the cybersecurity awareness page and help educate your organization so as to create a security-first culture and secure our world—together.

Learn more about AI safety and security


1Finance worker pays out $25 million after video call with deepfake ‘chief financial officer’, CNN, 2024.

The post AI safety first: Protecting your business and empowering your people appeared first on The Microsoft Cloud Blog.

]]>