Insights for Security Professionals | The Microsoft Cloud Blog
http://approjects.co.za/?big=en-us/microsoft-cloud/blog/job-function/security/

Navigating digital sovereignty at the frontier of transformation
Wed, 25 Mar 2026 07:00:00 +0000
http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2026/03/25/navigating-digital-sovereignty-at-the-frontier-of-transformation/

Digital sovereignty has become a practical leadership discipline grounded in risk management, continuity planning, and long-term accountability.

Digital sovereignty is no longer a theoretical debate or a narrow compliance exercise. For leaders across governments, regulated industries, and critical infrastructure sectors, it has become a practical leadership discipline grounded in risk management, continuity planning, and long-term accountability.

Over the past several years, we have seen customer concerns evolve materially. Early conversations focused primarily on privacy and lawful data handling. Today, those concerns have expanded. Leaders are now asking how they maintain operational continuity during disruption, how they adopt AI responsibly without losing control, and how they protect national, organizational, and customer interests in an increasingly volatile global environment.

These questions are not abstract. They surface in boardrooms, procurement decisions, architecture reviews, and crisis simulations. They reflect a broader shift in how trust is evaluated in digital systems. Today in Brussels we brought together attendees from around the world—policy makers, IT leaders, and enterprises—to approach these questions from multiple perspectives and move the conversation from headlines to action.

From privacy to resilience and beyond

Privacy remains foundational. But it is no longer the sole lens through which sovereignty is assessed.

Customers are increasingly concerned about business continuity in the face of cyber incidents, geopolitical tension, supply chain disruption, and network instability. They want to understand how critical workloads operate if connectivity is constrained, if dependencies fail, or if policy conditions change with little warning.

At the same time, innovation pressures have intensified. AI is becoming central to public service delivery, national competitiveness, and economic growth. Organizations cannot afford to pause progress while sovereignty questions are debated in isolation. They need approaches that allow them to move forward responsibly, balancing opportunity with control.

What we hear consistently is this: sovereignty concerns will continue to evolve. Any approach that treats them as static is already behind.

For four decades, Microsoft has operated under some of the world’s most demanding data protection, competition, and digital governance frameworks. Working closely with European institutions, regulators, and customers has shaped how we think about sovereignty—not as a regional exception, but as a discipline that must function at scale, under scrutiny, and over time. That experience matters because many of the sovereignty questions now emerging globally were first tested in Europe, long before they became mainstream elsewhere.

A consultative approach to risk management

This is why we believe digital sovereignty must be approached as consultative risk management, not a checkbox or a predefined deployment model.

Every organization faces a unique mix of regulatory obligations, cyber risk, operational exposure, and innovation goals. Even within a single institution, sovereignty requirements differ by workload. Some demand strict isolation and local control. Others require global scale, advanced security capabilities, and rapid innovation.

Our role is to help customers navigate these tradeoffs deliberately. That means working with them to assess risk, align architecture to policy realities, and design environments that reflect both today’s constraints and tomorrow’s unknowns.

This work sits at the intersection of cybersecurity, compliance, resilience, and frontier transformation. It requires ongoing engagement, transparency, and the willingness to adapt as conditions change.

Digital sovereignty posture in practice

A digital sovereignty posture that is flexible recognizes that no single approach can address every requirement. Instead, it focuses on giving organizations options, visibility, and control across a continuum of environments.

Customers operating in public cloud environments expect clear data residency options, strong encryption and access controls, and visible operational discipline. Just as important, they look for transparency into how cloud systems are governed and how exceptional situations are managed, particularly as regulatory scrutiny increases.

Those expectations do not disappear when workloads move closer to the edge. In fact, they intensify. For workloads that require greater isolation, local processing, or operation in constrained environments, hybrid and disconnected solutions become essential. In February, Microsoft announced the expansion of disconnected operations, enabling customers to run critical workloads in air-gapped environments while retaining consistent governance and operational control. This capability extends cloud-based practices into disconnected settings, supporting operational continuity without abandoning security and innovation. 

That commitment shows up in concrete safeguards that customers can independently evaluate and apply. The EU Data Boundary is one example, supporting data storage and processing within the EU and European Free Trade Association (EFTA) regions for cloud services, alongside longstanding investments in encryption, access controls, auditability, and operational transparency. These measures provide practical mechanisms for aligning cloud operations with regulatory and risk requirements, rather than relying on abstract assurances. 

At the same time, we are expanding options across hybrid and private cloud environments to support continuity, resilience, and local control where required. These investments reflect a simple reality: customer needs are not converging toward one model. They are diversifying.

Underpinning all of this are Microsoft’s digital commitments, which frame how we approach privacy, security, transparency, and responsible AI. These commitments are not marketing statements. They guide how systems are built, operated, and governed, and they provide a foundation for long-term accountability.

Practical guidance for leaders navigating sovereignty

As digital sovereignty becomes embedded in policy and procurement decisions, leaders benefit from a practical lens. Based on what we hear from customers and stakeholders, there are a few consistent themes shaping successful approaches:

  • Sovereignty requirements will continue to expand beyond privacy to include continuity, resilience, and AI governance.
  • Risk management is now inseparable from digital transformation strategy.
  • Flexibility and optionality matter more than rigid architectures.
  • Transparency and accountability are as important as technical capability.
  • Sovereignty posture must consider protections against cyberthreats.

Addressing these realities requires partners who understand the full scope of the challenge and are willing to engage over the long term. It requires platforms and collaboration designed with sovereignty in mind from the start.

So what does this mean for you?

Digital sovereignty is not a destination. It is an ongoing discipline shaped by changing technology, regulation, and global conditions.

At Microsoft, we approach this work with humility and responsibility. We recognize that customer concerns will continue to evolve, and that our own platforms and practices must evolve with them. We remain committed to expanding our sovereign cloud continuum, strengthening our cloud capabilities, and delivering solutions that balance innovation with control.

Most importantly, we remain focused on delivery. Because in moments of uncertainty, what matters most is not what technology promises, but what it allows organizations to do with confidence.

Where does digital sovereignty go from here?

The future of digital sovereignty will be defined by implementation, not rhetoric. Success will depend on collaboration between governments, industry, and civil society, as well as a shared commitment to transparency and continuous improvement.

As we look ahead, our focus remains on helping organizations turn sovereignty principles into durable, scalable outcomes. That means continuing to invest in capabilities that support trust, engaging constructively with policymakers, and listening closely to the evolving needs of our customers.

Digital trust is built over time, through consistent action and openness, and that trust is one of the most important foundations we can help create.

80% of Fortune 500 use active AI Agents: Observability, governance, and security shape the new frontier
Tue, 17 Feb 2026 15:45:00 +0000
http://approjects.co.za/?big=en-us/security/blog/2026/02/10/80-of-fortune-500-use-active-ai-agents-observability-governance-and-security-shape-the-new-frontier/

Read Microsoft’s new Cyber Pulse report for straightforward, practical insights and guidance on new cybersecurity risks.

Today, Microsoft is releasing the new Cyber Pulse report to provide leaders with straightforward, practical insights and guidance on new cybersecurity risks. One of today’s most pressing concerns is the governance of AI and autonomous agents. AI agents are scaling faster than some companies can see them—and that visibility gap is a business risk.1 Like people, AI agents require protection through strong observability, governance, and security using Zero Trust principles. As the report highlights, organizations that succeed in the next phase of AI adoption will be those that move with speed and bring business, IT, security, and developer teams together to observe, govern, and secure their AI transformation.

Read the latest Cyber Pulse report

Agent building isn’t limited to technical roles; today, employees in many positions create and use agents in daily work. More than 80% of Fortune 500 companies today use active AI agents built with low-code/no-code tools.2 AI is ubiquitous in many operations, and generative AI-powered agents are embedded in workflows across sales, finance, security, customer service, and product innovation.

With agent use expanding and transformation opportunities multiplying, now is the time to get foundational controls in place. AI agents should be held to the same standards as employees or service accounts. That means applying long‑standing Zero Trust security principles consistently:

  • Least privilege access: Give every user, AI agent, or system only what they need—no more.
  • Explicit verification: Always confirm who or what is requesting access using signals such as identity, device health, location, and risk level.
  • Assume breach: Design systems on the expectation that cyberattackers will get inside.
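These three principles can be expressed as a simple access-decision function. The sketch below is illustrative only: the names (`AccessRequest`, `GRANTED_SCOPES`, `authorize`) and fields are hypothetical and do not correspond to any specific Microsoft API.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    principal: str           # human user, AI agent, or service account
    verified_identity: bool  # explicit verification passed (MFA, device health, ...)
    risk_level: str          # "low", "medium", or "high" from a risk engine
    requested_scope: str     # resource or permission being requested

# Least privilege: each principal gets an explicit allowlist of scopes, no more.
GRANTED_SCOPES = {
    "invoice-agent": {"read:invoices"},
    "alice": {"read:invoices", "approve:invoices"},
}

def authorize(req: AccessRequest) -> bool:
    # Explicit verification: never trust by network location or past behavior alone.
    if not req.verified_identity or req.risk_level == "high":
        return False
    # Least privilege: the scope must have been explicitly granted.
    return req.requested_scope in GRANTED_SCOPES.get(req.principal, set())

# Assume breach: even a "trusted" agent is denied scopes it was never granted,
# so a compromised agent cannot escalate beyond its allowlist.
print(authorize(AccessRequest("invoice-agent", True, "low", "approve:invoices")))  # False
print(authorize(AccessRequest("invoice-agent", True, "low", "read:invoices")))     # True
```

The point of the sketch is that the same decision function applies whether the principal is a person or an agent, which is exactly the consistency the report calls for.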

These principles are not new, and many security teams have implemented Zero Trust principles in their organization. What’s new is their application to non‑human users operating at scale and speed. Organizations that embed these controls within their deployment of AI agents from the beginning will be able to move faster, building trust in AI.

The rise of human-led AI agents

AI agent adoption is growing across regions worldwide, from the Americas to Europe, the Middle East, and Africa (EMEA), and Asia.

[Figure: percentage of AI agent use by region worldwide.]

According to Cyber Pulse, leading industries such as software and technology (16%), manufacturing (13%), financial institutions (11%), and retail (9%) are using agents to support increasingly complex tasks—drafting proposals, analyzing financial data, triaging security alerts, automating repetitive processes, and surfacing insights at machine speed.3 These agents can operate in assistive modes, responding to user prompts, or autonomously, executing tasks with minimal human intervention.

[Figure: percentage of industries using agents to support complex tasks. Source: Industry Agent Metrics were created using Microsoft first-party telemetry measuring agents built with Microsoft Copilot Studio or Microsoft Agent Builder that were in use during the last 28 days of November 2025.]

And unlike traditional software, agents are dynamic. They act. They decide. They access data. And increasingly, they interact with other agents.

That changes the risk profile fundamentally.

The blind spot: Agent growth without observability, governance, and security

Despite the rapid adoption of AI agents, many organizations struggle to answer some basic questions:

  • How many agents are running across the enterprise?
  • Who owns them?
  • What data do they touch?
  • Which agents are sanctioned—and which are not?

This is not a hypothetical concern. Shadow IT has existed for decades, but shadow AI introduces new dimensions of risk. Agents can inherit permissions, access sensitive information, and generate outputs at scale—sometimes outside the visibility of IT and security teams. Bad actors might exploit agents’ access and privileges, turning them into unintended double agents. Like human employees, an agent with too much access—or the wrong instructions—can become a vulnerability. When leaders lack observability in their AI ecosystem, risk accumulates silently.

According to the Cyber Pulse report, 29% of employees have already turned to unsanctioned AI agents for work tasks.4 This gap is noteworthy: it suggests that many organizations are deploying AI capabilities and agents before establishing appropriate controls for access management, data protection, compliance, and accountability. In regulated sectors such as financial services, healthcare, and the public sector, the consequences can be particularly significant.

Why observability comes first

You can’t protect what you can’t see, and you can’t manage what you don’t understand. Observability means having a control plane across all layers of the organization (IT, security, developers, and AI teams) to understand:

  • What agents exist 
  • Who owns them 
  • What systems and data they touch 
  • How they behave 

In the Cyber Pulse report, we outline five core capabilities that organizations need to establish for true observability and governance of AI agents:

  • Registry: A centralized registry acts as a single source of truth for all agents across the organization—sanctioned, third‑party, and emerging shadow agents. This inventory helps prevent agent sprawl, enables accountability, and supports discovery while allowing unsanctioned agents to be restricted or quarantined when necessary.
  • Access control: Each agent is governed using the same identity‑ and policy‑driven access controls applied to human users and applications. Least‑privilege permissions, enforced consistently, help ensure agents can access only the data, systems, and workflows required to fulfill their purpose—no more, no less.
  • Visualization: Real‑time dashboards and telemetry provide insight into how agents interact with people, data, and systems. Leaders can see where agents are operating, understand dependencies, and monitor behavior and impact—supporting faster detection of misuse, drift, or emerging risk.
  • Interoperability: Agents operate across Microsoft platforms, open‑source frameworks, and third‑party ecosystems under a consistent governance model. This interoperability allows agents to collaborate with people and other agents across workflows while remaining managed within the same enterprise controls.
  • Security: Built‑in protections safeguard agents from internal misuse and external cyberthreats. Security signals, policy enforcement, and integrated tooling help organizations detect compromised or misaligned agents early and respond quickly—before issues escalate into business, regulatory, or reputational harm.
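The first two capabilities, a central registry and identity-driven access control, can be sketched in a few lines. Everything below (class names, fields, the quarantine behavior) is a hypothetical illustration of the pattern, not a real product interface:

```python
# Hypothetical in-memory agent registry illustrating the "Registry" and
# "Access control" capabilities described above.

class AgentRegistry:
    def __init__(self):
        self._agents = {}  # agent_id -> record

    def register(self, agent_id, owner, scopes, sanctioned=True):
        """Single source of truth: every agent has an owner and explicit scopes."""
        self._agents[agent_id] = {
            "owner": owner,
            "scopes": set(scopes),
            "sanctioned": sanctioned,
            "quarantined": False,
        }

    def quarantine(self, agent_id):
        """Restrict an unsanctioned or misbehaving agent without deleting its record."""
        self._agents[agent_id]["quarantined"] = True

    def can_access(self, agent_id, scope):
        rec = self._agents.get(agent_id)
        if rec is None or rec["quarantined"] or not rec["sanctioned"]:
            return False  # unknown, quarantined, and shadow agents get nothing
        return scope in rec["scopes"]  # least privilege: explicit grants only

registry = AgentRegistry()
registry.register("triage-bot", owner="secops@example.com", scopes=["read:alerts"])
print(registry.can_access("triage-bot", "read:alerts"))    # True
print(registry.can_access("triage-bot", "write:tickets"))  # False
registry.quarantine("triage-bot")
print(registry.can_access("triage-bot", "read:alerts"))    # False
```

Unregistered agents defaulting to "no access" is the design choice that makes a registry more than an inventory: discovery of a shadow agent and restriction of it become the same mechanism.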

Governance and security are not the same—and both matter

One important clarification emerging from Cyber Pulse is this: governance and security are related, but not interchangeable.

  • Governance defines ownership, accountability, policy, and oversight.
  • Security enforces controls, protects access, and detects cyberthreats.

Both are required. And neither can succeed in isolation.

AI governance cannot live solely within IT, and AI security cannot be delegated only to chief information security officers (CISOs). This is a cross-functional responsibility, spanning legal, compliance, human resources, data science, business leadership, and the board.

When AI risk is treated as a core enterprise risk—alongside financial, operational, and regulatory risk—organizations are better positioned to move quickly and safely.

Strong security and governance do more than reduce risk—they enable transparency. And transparency is fast becoming a competitive advantage.

From risk management to competitive advantage

This is an exciting time for leading Frontier Firms. Many organizations are already using this moment to modernize governance, reduce overshared data, and establish security controls that allow safe use. They are proving that security and innovation are not opposing forces; they are reinforcing ones. Security is a catalyst for innovation.

According to the Cyber Pulse report, the leaders who act now will mitigate risk, unlock faster innovation, protect customer trust, and build resilience into the very fabric of their AI-powered enterprises. The future belongs to organizations that innovate at machine speed and observe, govern, and secure with the same precision. If we get this right, and I know we will, AI becomes more than a breakthrough in technology—it becomes a breakthrough in human ambition.

Get the full Cyber Pulse report

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.


1Microsoft Data Security Index 2026: Unifying Data Protection and AI Innovation, Microsoft Security, 2026.

2Based on Microsoft first‑party telemetry measuring agents built with Microsoft Copilot Studio or Microsoft Agent Builder that were in use during the last 28 days of November 2025.

3Industry and Regional Agent Metrics were created using Microsoft first‑party telemetry measuring agents built with Microsoft Copilot Studio or Microsoft Agent Builder that were in use during the last 28 days of November 2025.

4July 2025 multi-national survey of more than 1,700 data security professionals commissioned by Microsoft from Hypothesis Group.

Methodology:

Industry and Regional Agent Metrics were created using Microsoft first‑party telemetry measuring agents built with Microsoft Copilot Studio or Microsoft Agent Builder that were in use during the past 28 days of November 2025. 

2026 Data Security Index: 

A 25-minute multinational online survey was conducted from July 16 to August 11, 2025, among 1,725 data security leaders. 

Questions centered around the data security landscape, data security incidents, securing employee use of generative AI, and the use of generative AI in data security programs to highlight comparisons to 2024. 

One-hour in-depth interviews were conducted with 10 data security leaders in the United States and United Kingdom to garner stories about how they are approaching data security in their organizations. 

Definitions: 

Active Agents are 1) deployed to production and 2) have some “real activity” associated with them in the past 28 days.  

“Real activity” is defined as 1+ engagement with a user (assistive agents) OR 1+ autonomous runs (autonomous agents).  
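The two definitions above translate directly into a filter over agent records. The sketch below is a hypothetical illustration assuming a simple record schema; it is not the actual telemetry pipeline:

```python
from datetime import datetime, timedelta

def is_active_agent(agent, now=None):
    """Apply the report's definition of an Active Agent: (1) deployed to
    production AND (2) "real activity" in the past 28 days, meaning at least
    one user engagement (assistive agents) or one autonomous run (autonomous
    agents). Field names here are illustrative, not a real telemetry schema."""
    now = now or datetime(2025, 11, 30)
    window_start = now - timedelta(days=28)
    if not agent["deployed_to_production"]:
        return False
    if agent["kind"] == "assistive":
        events = agent["user_engagements"]
    else:  # autonomous
        events = agent["autonomous_runs"]
    return any(ts >= window_start for ts in events)

agent = {
    "deployed_to_production": True,
    "kind": "autonomous",
    "user_engagements": [],
    "autonomous_runs": [datetime(2025, 11, 20)],
}
print(is_active_agent(agent))  # True
```

Note that under this definition a deployed but idle agent does not count as active, which is why the headline metric measures use rather than mere deployment.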

From awareness to action: Building a security-first culture for the agentic AI era
Wed, 10 Dec 2025 16:00:00 +0000
http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2025/12/10/from-awareness-to-action-building-a-security-first-culture-for-the-agentic-ai-era/

Microsoft helps leaders secure AI adoption with governance, training, and culture—turning cybersecurity into a growth and trust accelerator.

The insights gained from Cybersecurity Awareness Month through Microsoft Ignite 2025 demonstrate that security remains a top priority for business leaders: it serves as a strategic lever for organizational growth, fosters trust, and advances AI innovation. The Work Trend Index 2025 indicates that over 80% of leaders are using agents today or plan to do so within the next 12 to 18 months. While AI introduces risks such as oversharing, data leakage, compliance gaps, and agent sprawl, business and security leaders can address these issues in part by:

  1. Preparing for the integration of AI and agents.
  2. Strengthening training so that everyone has the necessary skills. 
  3. Fostering a culture that prioritizes cybersecurity. 

Preparing for the integration of AI and intelligent agents

Preparing for AI and agent integration calls for careful strategy, thoughtful business planning, and organization-wide adoption under solid governance, security, and management. Microsoft’s AI adoption model offers a step-by-step guide for businesses embarking on this journey, with actionable insights and solutions for managing AI risks.

Strengthening training so that everyone has the necessary skills

Technology alone isn’t enough. People are your strongest defense—and the foundation of trust. That’s why skilling emerged as a central theme throughout these past months and will continue beyond. Frontier Firms—those structured around on-demand intelligence and powered by “hybrid” teams of humans plus agents—lead by fostering a culture of continuous learning. Our blog “Building human-centric security skills for AI” offers insights and guidance you can apply in your organization.  

  • Lean into your unique human strengths: Your team’s judgment, creativity, and experience are irreplaceable. Take time to invest in upskilling and reskilling them, so they can confidently guide and manage AI tools responsibly and securely. Explore Microsoft Learn for Organizations for resources to support your learning journey.
  • Stay curious and agile through continuous learning: Building security resilience is an ongoing process. Regularly refresh your AI and security training, offer time and resources for employees to explore new skills, and create a supportive, engaging environment that motivates continuous growth. Explore AI Skills Navigator, our agentic learning space, for AI and security training tailored to different roles.

Investing in skilling doesn’t just reduce risk—it accelerates innovation by giving teams the confidence to explore new AI capabilities securely. 

Skilling is an ongoing practice that needs to constantly evolve alongside the business and technology landscape. Staying ahead requires an enterprise-wide strategy that aligns ever-changing business priorities with always-on skill-building. 

—Jeana Jorgensen, Corporate Vice President, Microsoft Learning

Fostering a culture that prioritizes security

As AI impacts everyone’s role, make security awareness and responsible AI practices shared priorities. Encourage your team to weave security thinking into their daily routines—creating a safer environment for all. As Vasu Jakkal, Corporate Vice President of Microsoft Security highlighted in her blog “Cybersecurity Awareness Month: Security starts with you,” it is critical that security become part of your organization’s culture and norms. 

Check out our new e-book, Skilling for Secure AI: How Frontier Firms Lead the Way for practical steps for leaders to upskill their workforce in identity management, data governance, and responsible AI practices.

From awareness to action

In the agentic AI era, people continue to be our most valuable resource. It’s essential to empower them with AI and equip them with the skills they need to use AI responsibly and securely. Cybersecurity awareness should go beyond designated months or campaigns; true awareness means taking meaningful action.   

Here are three actions you can take today to maximize your AI investments: 

  1. Share the Be Cybersmart Kit with your employees. It includes tips for protecting yourself from fraud and deepfakes, guidance on safe AI usage, and key security best practices.
  2. Invest in people: Focus on upskilling initiatives that support your AI transformation, cloud modernization, and security-first strategies.
  3. Champion a security-first culture: Ensure cybersecurity is integral to every business discussion and woven into your overall strategy. 

Microsoft guide for securing the AI-powered enterprise


Future-proofing healthcare cybersecurity: What every leader should know
Wed, 03 Dec 2025 17:00:00 +0000
http://approjects.co.za/?big=en-us/industry/blog/healthcare/2025/12/03/future-proofing-healthcare-cybersecurity-what-every-leader-should-know/

At the 2025 Scottsdale Institute CISO Summit, healthcare leaders are rethinking cybersecurity, including how collaboration and training build resilience across healthcare systems.

Healthcare cybersecurity isn’t just about technology—it’s about people, trust, and the future of care.

Healthcare leaders today are navigating a landscape of escalating cyberthreats and increasing operational complexity. Cybersecurity is not just a technical requirement—it’s essential to building patient trust, ensuring care continuity, and enabling future innovation in healthcare.

At the 2025 Scottsdale Institute CISO Summit, top security leaders gathered to share real stories, big challenges, and practical solutions for keeping patient data safe in a rapidly changing world. As a follow-up, they released a report, Future-Proofing Healthcare Cybersecurity: AI, Cloud Transformation, and Capabilities for Tomorrow.

Here are a few highlights:

Why cybersecurity matters more than ever

Healthcare is an often-targeted and heavily regulated industry with patient outcomes at stake.

  • Cyberthreats are evolving fast. AI and cloud transformation are opening new doors for care, but also new risks. Cybercriminals are getting smarter, and healthcare organizations must keep pace to protect sensitive information and help ensure patient safety.
  • It’s personal. Healthcare leaders reminded us that every security decision impacts real people—patients, families, and staff. The goal is always to deliver the best care, safely.

What healthcare teams need to know about cybersecurity

1. Collaboration is critical

CEOs, CIOs, and CISOs must work together. Innovation and security go hand-in-hand, and strong partnerships help organizations stay ahead of threats.

2. AI opportunities and challenges

AI can make healthcare smarter and more efficient, but it also introduces new risks. Leaders must ask tough questions about how AI tools use data, how they’re trained, and how to keep them secure.

3. Training and upskilling

Investing in technology is only half of the battle. Staff need ongoing training to use new tools safely and effectively. Creative incentives—like paid training time or career pathways—help teams grow and adapt.

4. Breaking down silos

Legacy structures can slow progress. Integrated teams and cross-functional collaboration are key to finding and fixing vulnerabilities quickly.

5. Third-party risk management

Vendor relationships are more complex than ever. Organizations must raise the bar for vendor assessments, ensure business continuity, and educate users about risks.

6. Resilience and response

Prevention is important, but detection and rapid response are essential. AI-powered tools can help spot suspicious behavior, but human oversight remains crucial.

Patient safety, care continuity, and trust in healthcare depend on getting cybersecurity right

Healthcare organizations face a critical inflection point. Success will require:

  • Embracing AI-powered defenses
  • Building stronger networks among security professionals
  • Accelerating vendor sophistication
  • Developing agile incident response protocols

Security-first in action: St. Luke’s Health Network

For St. Luke’s University Health Network, protecting patient data is key to delivering great care. Serving people in Pennsylvania and New Jersey at 13 hospitals and 607 practices, including a number of specialties, it has a sizeable data estate to safeguard.

Succeeding at that vital mission got easier when St. Luke’s reduced its number of security tools and gained dramatically greater visibility into the data it needs to maintain security.

It replaced several third-party security solutions with Microsoft Sentinel, Microsoft Defender for Cloud, and Microsoft Defender for Office 365, adding to its Microsoft Security solution base for a unified security posture that helps security teams do what they do best: protect St. Luke’s from an ever-evolving threat landscape.

–David Finkelstein, Chief Information Security Officer, St. Luke’s University Health Network

Let’s build a secure future for healthcare, together

At Microsoft, we’re focused on helping organizations consolidate fragmented security capabilities and apply intelligence to deliver better outcomes. Since launching the Secure Future Initiative (SFI) in November 2023, Microsoft has mobilized the equivalent of more than 34,000 engineers to mitigate risk and improve security for Microsoft and our customers.¹

Guided by three security principles—secure by design, by default, and in operations—we have made measurable progress in the areas of culture, governance, and our six engineering pillars. Still, there is more to do, and teams across the company are working to improve the security of every product, address learnings from every incident, and continuously improve our methods and practices.

Microsoft has been a leader for years in developing AI technologies in accordance with responsible AI principles designed to meet compliance requirements, protect data and systems, and maintain customer trust.

Learn how AI can help fortify healthcare security and compliance


1 November 2025 Secure Future Initiative progress report, Microsoft

The post Future-proofing healthcare cybersecurity: What every leader should know appeared first on The Microsoft Cloud Blog.

]]>
Cybersecurity Awareness Month: Security starts with you http://approjects.co.za/?big=en-us/security/blog/2025/10/01/cybersecurity-awareness-month-security-starts-with-you/ http://approjects.co.za/?big=en-us/security/blog/2025/10/01/cybersecurity-awareness-month-security-starts-with-you/#respond Wed, 01 Oct 2025 16:00:00 +0000 Make the most out of Cybersecurity Awareness Month with resources from Microsoft.

The post Cybersecurity Awareness Month: Security starts with you appeared first on The Microsoft Cloud Blog.

]]>
At Microsoft, security is our number one priority, and we believe that cybersecurity is as much about people as it is about technology. As we move into October and kick off Cybersecurity Awareness Month, this time of year really makes me think about how important online safety is—not just at work, but for my family and friends too. I often find myself sharing tips with loved ones on how to stay safe online, because building strong security habits and keeping them top of mind has become a key part of how I protect myself and those around me.

Explore Microsoft Cybersecurity Awareness resources

As part of the Microsoft Secure Future Initiative (SFI), we have committed to embed security into every layer of our technology, culture, and governance—placing security above all else. Since its launch in November 2023, SFI has mobilized the equivalent of more than 34,000 engineers to proactively reduce risk and strengthen security across Microsoft and the products and services we offer our customers. A great example of this is mitigating advanced multifactor authentication attacks, where phishing-resistant multifactor authentication now protects 100% of production system accounts and 92% of employee productivity accounts. In addition, we continue to reduce the risk of compromise during new employee setup by enforcing video-based verification, now at 99%.1

Enabling your security-first approach

This year, we have also developed new resources and tools to support security professionals in keeping their organizations secure, particularly as we enter this next era of AI. Building upon our learnings with SFI, we have created SFI patterns and practices, which is a new library of actionable guidance designed to help organizations implement security at scale.

In addition to best practices for security professionals, we continue to add articles to our Be Cybersmart Kit, a great starting point for security professionals who need to educate their organizations on how to stay safe. The Be Cybersmart Kit contains articles on AI safety, device security, domain impersonation, fraud, secure sign-in, and phishing. The kit is just one of the many resources available on the Microsoft Cybersecurity Awareness site.

Be Cybersmart

Help educate everyone in your organization with cybersecurity awareness resources and training curated by the security experts at Microsoft.

Get the Be Cybersmart Kit.

Those seeking more in-depth resources can access expert-level learning paths, certifications, and technical documentation to continue their cybersecurity education. And for students pursuing the field of cybersecurity, the Microsoft Cybersecurity Scholarship Program and educational opportunities like Microsoft Elevate are here to help. The goal of all these programs is to help foster a culture that puts security and continuous learning first for students and professionals alike.

Security-first in action: Franciscan Alliance

A great example of a security-first culture, especially around education and awareness training, is Franciscan Alliance, a non-profit Catholic health care organization based in Indiana. Franciscan Alliance employs a proactive and interactive strategy for cybersecurity awareness and employee education.

“We believe cybersecurity education should be continuous, engaging, and empowering—because informed employees are our strongest defense.”

—Jay Bhat, Chief Information Security Officer (CISO), Franciscan Alliance

The organization conducts monthly phishing simulations and quarterly assessments to expose staff to realistic scenarios consistently. Employees who do not pass the quarterly assessments are provided with additional training rather than being penalized, which supports a culture centered on learning and development. Training programs incorporate gamification elements to enhance accessibility and retention. Additionally, employees receive a monthly newsletter covering relevant security topics that support safe practices both professionally and personally.

During Cybersecurity Awareness Month, weekly editions are distributed, along with timely updates on emerging threats, including breaches and attacks. Franciscan Alliance also organizes threat briefings with external partners and draws on resources such as Microsoft’s Cybersecurity Awareness materials to inform its training initiatives.

Developing security competencies in the age of AI

As organizations rapidly embrace AI, making security the first priority is not just a best practice—it’s a necessity. AI systems are powerful tools that can transform business productivity, but without robust governance and security measures, they can also introduce significant risks. To address these challenges and empower security-first leadership, we invite C-level executives to register for Microsoft’s upcoming webinar “Trust in AI: Accelerate Business Growth with Confidence,” which will feature critical discussions on how to build trust in AI for your organization.

Get started here:

Additionally, Microsoft’s Chief Product Officer of Responsible AI Sarah Bird will moderate the panel, “Cyber and AI, Strategic Risk and Competitive Advantage,” at the NASDAQ Summit on October 21, 2025, at the New York Stock Exchange, where industry experts will provide guidance on governance and security for AI. In this session, experts will discuss real-world use cases, regulatory developments, and the strategic implications of integrating AI into enterprise environments. Events such as these are incredible opportunities for executives to deepen their understanding and lead with confidence in the age of AI.

Get the Be Cybersmart Kit

Make the most out of Cybersecurity Awareness Month

We hope that these resources provide you with the learning, training, and confidence to set you and your organizations up for success—both this month and beyond. Now is the time to build a culture with a security-first mindset by making security part of your daily habits at work, home, and everywhere else. A security-first mindset means staying informed, proactively protecting digital assets, and encouraging others to do the same. Security is a team sport. By promoting vigilance and shared responsibility, we can create a safer world for all.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.


1 April 2025 SFI progress report, Microsoft

The post Cybersecurity Awareness Month: Security starts with you appeared first on The Microsoft Cloud Blog.

]]>
Unleashing the power of AI in India http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2025/02/06/unleashing-the-power-of-ai-in-india/ http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2025/02/06/unleashing-the-power-of-ai-in-india/#respond Thu, 06 Feb 2025 16:00:00 +0000 India has embraced the power of AI to reshape industries, drive innovation, and unlock new opportunities across the nation.

The post Unleashing the power of AI in India appeared first on The Microsoft Cloud Blog.

]]>
This blog is part of the AI worldwide tour series, which highlights customers from around the globe who are embracing AI to achieve more. Read about how customers are using responsible AI to drive social impact and business transformation with Global AI innovation.

It’s no secret that India is well-positioned to be a global leader in the AI era, having embraced the power of AI to reshape industries, drive innovation, and unlock new opportunities across the nation. Boasting a vast talent pool, proactive government initiatives, and a thriving startup ecosystem, India is uniquely equipped to leverage AI to solve pressing societal and business challenges and optimize operations across a wide array of civic and business verticals.

A long-standing partner in India’s technological growth, Microsoft has solidified its commitment with a US $3 billion investment to expand AI and Azure cloud infrastructure in the country. This initiative is designed to accelerate AI adoption across industries, empower businesses to integrate AI into critical processes, and nurture local talent to meet the evolving demands of the tech ecosystem. These efforts underscore Microsoft’s confidence in India’s position as a global leader in AI innovation and technological advancement.

AI business resources

Help your organization achieve its transformation goals


Local ingenuity was on full display during the Microsoft AI Tour stop in Bengaluru and New Delhi, where organizations showcased how they are leveraging AI to tackle complex challenges, streamline workflows, and drive transformative efficiencies across industries.

MakeMyTrip powers the future of travel with AI

MakeMyTrip (MMT), India’s leading online travel company, is at the forefront of enhancing the travel shopping experience with generative AI. Over its 24-year journey, MMT has served more than 77 million users, offering comprehensive travel booking services. A standout feature powered by generative AI is Myra, their conversational bot. MMT is integrating an AI-powered workflow within Myra to assist users seamlessly at every stage of their travel journey—from pre-trip planning to in-trip support and post-trip follow-up. Built using large language models (LLMs) and orchestrated via Microsoft Azure AI Foundry, these services ensure smooth assistance throughout the travel process. As one of the early adopters of generative AI in travel tech, MMT is leading the next generation of travel experiences.

Persistent Systems improves contract management with AI-powered agent

Persistent Systems, one of the world’s fastest-growing digital engineering and enterprise modernization service providers, faced recurring challenges in contract management: inefficient workflows and lengthy negotiation cycles were causing bottlenecks in an otherwise agile organization. Persistent turned to generative AI and the Microsoft technology stack to reimagine its approach, developing ContractAssIst, an AI-powered agent built with generative AI and Microsoft 365 Copilot, to transform collaboration and streamline internal contract negotiations. Designed with security and access controls in mind, the tool enhances collaboration, streamlines workflows, and accelerates decision-making.

As a result, ContractAssIst has reduced emails during negotiations by 95% and cut navigation and negotiation time by 70%; the task now takes approximately 20 to 25 minutes. Persistent has deployed Microsoft 365 Copilot to nearly 2,000 users and plans to extend it to a broader audience.

LTIMindtree unlocks data management with Microsoft 365 Copilot

LTIMindtree, a global technology consulting and digital solutions company with more than 84,000 employees in more than 30 countries, is leveraging AI in innovative ways to drive digital transformation and enhance business and IT operations. The company has demonstrated how Microsoft 365 Copilot and AI agents are transforming critical business functions such as pre-sales, resource management, and cybersecurity. For example, custom-built AI agents help resource management teams quickly find employees with the relevant skills and match them to specific projects, and help pre-sales and account managers use historical data to create high-quality responses to incoming requests for proposals (RFPs) and requests for information (RFIs). LTIMindtree is also using Microsoft Security Copilot to create a unified command center for investigations, threat intelligence, and incident response, empowering it to build a next-gen Security Operations Center (SOC). As a result, LTIMindtree has seen a 30% increase in overall employee efficiency, with 20% less time spent on emails and day-to-day task allocation.

Streamlining health claims with ICICI Lombard’s AI-powered solution

ICICI Lombard, a leading private insurer in India, has developed an innovative solution to streamline health claims processing. Traditionally, claim adjudicators manually filed claims, a time-consuming process involving the review of 20 pages of documents. Leveraging Microsoft Azure OpenAI Service, Azure AI Document Intelligence, and Azure AI Vision OCR service, ICICI Lombard’s new solution extracts relevant information from these documents, providing adjudicators with a consolidated view of the diagnosis and treatment. This innovation has reduced the time required to process claims by more than 50%.

eSanjeevani transforms healthcare access with innovative AI solutions

eSanjeevani, India’s National Telemedicine Service by the Ministry of Health and Family Welfare, has integrated AI-enabled tools to enhance care quality and streamline teleconsultations, promoting equitable access to healthcare across the country. Powered by Azure, it offers secure, scalable, and accessible doctor-to-doctor and doctor-to-patient teleconsultations. eSanjeevani is advancing its AI journey with Microsoft AI, enhancing productivity, data analysis, and user experience. These innovations are helping eSanjeevani set new benchmarks in telemedicine and digital healthcare services. It is also developing a proof of concept with Microsoft Copilot to transcribe doctor-patient conversations in real time for advanced speech analytics, aiding data-driven decisions. Serving more than 330 million patients, 98% from rural areas, eSanjeevani is today the world’s largest telemedicine initiative in primary healthcare.

AI for everyone in India

India AI Tour keynote with Satya Nadella, Chief Executive Officer.

India’s AI journey is not just about innovation, it’s about transformation across industries and lives. From travel to healthcare, banking to engineering, the case studies showcased here demonstrate the immense potential of AI when paired with the right tools, partnerships, and vision. Microsoft’s investments and technologies have enabled organizations in India to tackle challenges, streamline processes, and unlock new levels of efficiency and growth. As India continues to lead in the global AI revolution, these examples serve as a testament to how AI can create meaningful impact, fostering a future where innovation drives progress for everyone.

Find the resources to support your AI journey

The post Unleashing the power of AI in India appeared first on The Microsoft Cloud Blog.

]]>
Hear from Microsoft Security experts at these top cybersecurity events in 2025 http://approjects.co.za/?big=en-us/security/blog/2025/02/03/hear-from-microsoft-security-experts-at-these-top-cybersecurity-events-in-2025/ http://approjects.co.za/?big=en-us/security/blog/2025/02/03/hear-from-microsoft-security-experts-at-these-top-cybersecurity-events-in-2025/#respond Mon, 03 Feb 2025 17:00:00 +0000 If you’re looking to boost your skills and stay ahead of the threat landscape, join Microsoft Security at the top cybersecurity events in 2025.

The post Hear from Microsoft Security experts at these top cybersecurity events in 2025 appeared first on The Microsoft Cloud Blog.

]]>
Inspiration can spark in an instant when you’re at a conference. Perhaps you discover a new tool during a keynote that could save you hours of time. Or maybe a peer shares a story over coffee that makes you rethink an approach. One conversation, one session, or one event could give you fresh ideas, renewed excitement, and a vision for what to do next.

In the current AI landscape, inspiration and information are more important than ever for security professionals to stay ahead of threat actors. So if you’re looking to boost your skills and stay ahead of the threat landscape, join Microsoft Security at the top cybersecurity events in 2025.

Whether you join us at an industry staple like RSAC or one of our own events like Microsoft Secure, you can benefit in several key ways:

  • Get insights and strategies needed to overcome obstacles and drive your security initiatives forward with confidence.
  • See live demos of the latest products, product features, skills, and tools you can use in your work. Be among the first to hear about Microsoft Security innovations, such as the Secure Future Initiative and XSPA (cross-site port attack) updates that attendees of Microsoft Ignite 2024 heard.
  • Learn from Microsoft Security experts on global threat intelligence.
  • Network with other like-minded security pros, learn best practices from your peers, and meet one-on-one with our experts.

Whatever your role, there’s an event for you and a path to successfully safeguarding your organization.


Microsoft at RSAC

From our signature Pre-Day to hands-on demos and one-on-one meetings, discover how Microsoft Security can give you the advantage you need in the era of AI.

Register now 

Conferences to inspire and engage everyone

Large crowd of people attending Microsoft Ignite in Chicago, November 2024.

Security professionals of all levels can benefit from attending one of the biggest cybersecurity events, including RSAC, Black Hat, plus two premier Microsoft events—Microsoft Secure (virtual) and Microsoft Ignite (in-person and virtual). If you love being the first to hear about Microsoft product innovations, don’t miss these Microsoft events with insights every security professional can put to good use.

Microsoft Secure

Date: April 9, 2025
Location: Online only

Microsoft Secure is Microsoft’s cybersecurity conference. This year’s one-hour digital showcase will spotlight AI-first, end-to-end security innovations, with clear use cases and customer stories about how organizations use our tools daily. Attendees will dive deep into cybersecurity products and strategies alongside thousands of other cybersecurity professionals.

RSAC

Dates: April 27-May 1, 2025
Location: San Francisco, CA

RSAC 2025 is a can’t-miss security conference, bringing together more than 40,000 security professionals to discuss the latest cybersecurity challenges and innovation with the best of the best. With the theme of “Many Voices. One Community,” RSAC will feature keynotes, track sessions, interactive sessions, networking opportunities, and an expo designed to foster advanced security strategies.

Throughout RSAC, Microsoft Security will showcase end-to-end security innovations and share world-class threat and regulatory intelligence to give you the advantage you need in the era of AI. From our signature Pre-Day to hands-on demos and one-on-one meetings, there’s plenty to explore. Check out the full Microsoft at RSAC experience.

Learn more about the Microsoft Events at RSA Conference 2025

Black Hat

Dates: August 2-7, 2025
Location: Las Vegas, NV

The Black Hat Conference is a premier learning event in the cybersecurity industry, known for its in-depth technical sessions and cutting-edge research presentations on topics like critical infrastructure and information security research news.

Microsoft is a key sponsor of the conference each year, where we showcase our latest discoveries and AI research on real-world problems and solutions. Last year, our AI Red Teaming in Practice training sessions and our AI Summit roundtables were a hit. Black Hat is also known for its security community celebrations, including the Cybersecurity Woman of the Year Awards and the Researcher celebrations, which we take part in every year.

Learn more about the Black Hat Conference 2025

Microsoft Ignite

Dates: November 17-21, 2025
Location: San Francisco, CA, and online

Microsoft Ignite is Microsoft’s biggest annual conference for developers, IT professionals, business leaders, security professionals, and partners. Thousands of security professionals like you attend every year to hear the biggest security product announcements from Microsoft Security and gain training and skilling to prepare for future advancements in AI. Security professionals of all levels can join interactive labs, workshops, keynotes, technical breakout sessions, demos, and more, led by Microsoft Security leaders and experts.

Over the past few years, we’ve really boosted Microsoft Security experiences at Microsoft Ignite. Last year, we hosted the Microsoft Ignite Security Forum for security leaders and two workshops on AI red teaming and Microsoft 365 Copilot deployment. Plus, we hosted more than 30 sessions demoing new features to help you secure your environment, use your favorite Microsoft tools safely and securely, and make sure your organizational processes prioritize security first.

If you attend Microsoft Ignite in person this year, you won’t want to miss our Security Leaders Dinner or the security community party. If you’re not able to attend in person, you can register for our virtual event.​ Sign up to learn more.

Learn more about Microsoft Ignite 2025

Events for security leaders and decision-makers


Microsoft AI Tour

Dates: Through May 30, 2025
Location: Multiple worldwide

The Microsoft AI Tour is a free, one-day event for executives that explores the ways AI can drive growth and create lasting value in multiple cities around the globe. Whether you’re a functional decision-maker who evaluates investments, an IT team member charged with security, or a CISO revamping your security strategy, there will be valuable security content tailored to your needs.

Microsoft Security’s top business leaders attend AI Tour stops worldwide to share how Microsoft Security Copilot lets you protect at the speed and scale of AI. They are also available to meet with you.

Reserve your spot at an event near you

Event location | Event date
Dubai, United Arab Emirates | February 6, 2025
Singapore, Southeast Asia | February 19, 2025
Tokyo, Japan | February 26-27, 2025
London, United Kingdom | March 5, 2025
Brussels, Belgium | March 25, 2025
Seoul, South Korea | March 26, 2025
Paris, France | March 26, 2025
Madrid, Spain | March 27, 2025
Tokyo, Japan | March 27, 2025
Beijing, China | April 23, 2025
Athens, Greece | May 27-30, 2025

Gartner Security and Risk Management Summit

Dates: June 9-11, 2025
Location: National Harbor, MD

The Gartner Security and Risk Management Summit (Gartner SRM) explores trends in cybersecurity risk management, including the integration of generative AI, being an effective CISO, the importance of balancing response and recovery efforts with prevention, combating misinformation, and closing the cybersecurity skills gap to build a resilient workforce.

Microsoft Security executives host sessions at Gartner SRM to help you ensure the security of AI systems and adopt AI to drive innovation and efficiency. Our most popular topics center around securing and governing AI.

Learn more about the Gartner Security and Risk Management Summit

Events for technical and security practitioners

People attending the Microsoft booth at RSAC 2024.

Security teams look for conferences that provide specialized knowledge on the industry in which they work or on a narrow cybersecurity topic.

Legalweek

Dates: March 24-27, 2025
Location: New York, NY

Legalweek is a weeklong conference where approximately 6,000 members of the legal community will gather to network with their peers, explore emerging trends, spotlight the latest tech, and offer a roadmap through industry shifts. Topics explored at past Legalweek conferences include the ethical and regulatory impact of using your data to train AI, litigation in the age of cybersecurity, and maximizing efficiency and legal automation.  

This year, we’ll be sponsoring three sessions on AI and one on collaboration in complex litigation. As in years past, Microsoft is hosting an Executive Breakfast at Legalweek from 7:30 AM to 8:45 AM ET on Tuesday, March 25, 2025. RSVP today and stop by Booth #3103 in New York Hilton Midtown Americas Hall 2 to learn more about the latest Microsoft Purview innovations. If you’d like to meet with our team while at Legalweek, sign up for a one-on-one meeting.

Learn more about Legalweek 2025

Identiverse

Dates: June 3-6, 2025
Location: Las Vegas, NV

Limiting access to AI, apps, and resources to those with the proper permissions is a crucial part of security. The Identiverse conference provides education, collaboration, and insight into the future of identity security. More than 2,500 attendees will share insights, develop new ideas, and advance the state of modern digital identity and security.

The event features sessions on best practices, industry trends, and latest technologies; an exhibition hall to showcase the latest identity solution innovations; and networking opportunities. Microsoft will host a booth where attendees can connect with Microsoft Security experts and leaders.

Learn more about Identiverse 2025

Events for developers

The cybersecurity talent shortage is requiring many to step up even if cybersecurity isn’t in their official job description. If you are an IT professional being tasked with cybersecurity or someone with an eagerness to learn cybersecurity tactics, join our Microsoft events aimed at helping you uplevel your cybersecurity skills.

Microsoft Build

Dates: May 19-22, 2025
Location: Seattle, WA

Security is a team sport, and developers are increasingly the first-string team members who build security into applications from the start. Microsoft Build 2025 is Microsoft’s developer-focused event. It will showcase updates and innovations from Microsoft Security that help developers create AI-enabled security solutions for their organizations.

The event includes connection opportunities, demos, and security-focused sessions. Past topics have included using AI to accelerate development processes, tools for enhancing the developer experience, and strategies for building in the cloud. Stay up to date on Microsoft Build news and find out when registration is open.

Learn more about the Microsoft Build Conference 2025

Find your inspiration at an event this year

Cybersecurity events foster a culture of continuous learning and adaptation, empowering you to stay ahead of emerging cyberthreats and maintain a resilient security posture. The ideas will flow freely at these events. Whether you attend one of the biggest conferences of the year or a smaller event (or both), you’ll be in good company. Microsoft Security will be there, too, excited to share and eager to learn.

Hope to see you at a future event!

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.

The post Hear from Microsoft Security experts at these top cybersecurity events in 2025 appeared first on The Microsoft Cloud Blog.

]]>
Making it easier for companies to build and ship AI people can trust https://news.microsoft.com/source/features/ai/making-it-easier-for-companies-to-build-and-ship-ai-people-can-trust/ https://news.microsoft.com/source/features/ai/making-it-easier-for-companies-to-build-and-ship-ai-people-can-trust/#respond Wed, 22 Jan 2025 16:00:00 +0000 Generative AI is transforming many industries, but businesses often struggle with how to create and deploy safe and secure AI tools as technology evolves.

The post Making it easier for companies to build and ship AI people can trust appeared first on The Microsoft Cloud Blog.

]]>
Generative AI is transforming many industries, but businesses often struggle with how to create and deploy safe and secure AI tools as technology evolves. Leaders worry about the risk of AI generating incorrect or harmful information, leaking sensitive data, being hijacked by attackers or violating privacy laws — and they’re sometimes ill-equipped to handle the risks.  

“Organizations care about safety and security along with quality and performance of their AI applications,” says Sarah Bird, chief product officer of Responsible AI at Microsoft. “But many of them don’t understand what they need to do to make their AI trustworthy, or they don’t have the tools to do it.”  

To bridge the gap, Microsoft provides tools and services that help developers build and ship trustworthy AI systems, or AI built with security, safety and privacy in mind. The tools have helped many organizations launch technologies in complex and heavily regulated environments, from an AI assistant that summarizes patient medical records to an AI chatbot that gives customers tax guidance.  

The approach is also helping developers work more efficiently, says Mehrnoosh Sameki, a Responsible AI principal product manager at Microsoft. 

This post is part of Microsoft’s Building AI Responsibly series, which explores top concerns with deploying AI and how the company is addressing them with its responsible AI practices and tools.

“It’s very easy to get to the first version of a generative AI application, but people slow down drastically before it goes live because they’re scared it might expose them to risk, or they don’t know if they’re complying with regulations and requirements,” she says. “These tools expedite deployment and give peace of mind as you go through testing and safeguarding your application.”  

The tools are part of a holistic method that Microsoft provides for building AI responsibly, honed by expertise in identifying, measuring, managing and monitoring risk in its own products — and making sure each step is done. When generative AI first emerged, the company assembled experts in security, safety, fairness and other areas to identify foundational risks and share documentation, something it still does today as technology changes. It then developed a thorough approach for mitigating risk and tools for putting it into practice.  

The approach reflects the work of an AI Red Team that identifies emerging risks like hallucinations and prompt attacks, researchers who study deepfakes, measurement experts who developed a system for evaluating AI, and engineers who build and refine safety guardrails. Tools include the open-source framework PyRIT for red teams to identify risks, automated evaluations in Azure AI Foundry for continuously measuring and monitoring risks, and Azure AI Content Safety for detecting and blocking harmful inputs and outputs.

Microsoft also publishes best practices for choosing the right model for an application, writing system messages and designing user experiences as part of building a robust AI safety system.  

“We use a defense-in-depth approach with many layers protecting against different types of risks, and we’re giving people all the pieces to do this work themselves,” Bird says. 
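In code, that layered idea can be sketched as a pipeline in which independent checks screen both the user's input and the model's output. This is a minimal illustration with toy stand-in filters, not Azure AI Content Safety's actual API:

```python
# Hypothetical sketch of a defense-in-depth safety pipeline: several
# independent layers screen both the user input and the model output.
# The classifiers below are toy stand-ins for real safety services.

import re
from dataclasses import dataclass
from typing import Callable

@dataclass
class LayerResult:
    layer: str
    blocked: bool
    reason: str = ""

def blocklist_layer(text: str) -> LayerResult:
    # Toy denylist of harmful request phrases.
    banned = {"make a bomb", "steal identity"}
    hit = next((b for b in banned if b in text.lower()), None)
    return LayerResult("blocklist", hit is not None, hit or "")

def pii_layer(text: str) -> LayerResult:
    # Toy PII detector: flags US Social Security number patterns.
    has_ssn = re.search(r"\b\d{3}-\d{2}-\d{4}\b", text) is not None
    return LayerResult("pii", has_ssn, "ssn-pattern" if has_ssn else "")

def screen(text: str, layers: list[Callable[[str], LayerResult]]) -> list[LayerResult]:
    results = []
    for layer in layers:
        r = layer(text)
        results.append(r)
        if r.blocked:  # stop at the first layer that blocks
            break
    return results

LAYERS = [blocklist_layer, pii_layer]

def guarded_reply(user_input: str, model: Callable[[str], str]) -> str:
    # Layer 1: screen the input before it ever reaches the model.
    if any(r.blocked for r in screen(user_input, LAYERS)):
        return "[input blocked]"
    output = model(user_input)
    # Layer 2: screen the model's output before it reaches the user.
    if any(r.blocked for r in screen(output, LAYERS)):
        return "[output blocked]"
    return output
```

Because each layer is independent, a failure in one check can still be caught by another, which is the essence of defense in depth.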

For the tax-preparation company that built a guidance chatbot, the capability to correct AI hallucinations was particularly important for providing accurate information, says Sameki. The company also made its chatbot more secure, safe and private with filters that block prompt attacks, harmful content and personally identifiable information.  

“Making our own AI systems trustworthy is foundational in what we do, and we want to empower customers to do the same.”

Sarah Bird, chief product officer of Responsible AI

She says the health care organization that created the summarization assistant was especially interested in tools for improving accuracy and creating a custom filter to make sure the summaries didn’t omit key information.  

“A lot of our tools help as debugging tools so they could understand how to improve their application,” Sameki says. “Both companies were able to deploy faster and with a lot more confidence.”  

Microsoft is also helping organizations improve their AI governance, a system of tracking and sharing important details about the development, deployment and operation of an application or model. Available in private preview in Azure AI Foundry, AI reports will give organizations a unified platform for collaborating, complying with a growing number of AI regulations and documenting evaluation insights, potential risks and mitigations.
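As a rough illustration of what such a governance record might track, here is a minimal sketch; the field names and example values are assumptions for illustration, not the actual Azure AI Foundry AI reports schema:

```python
# Hypothetical sketch of an AI governance record in the spirit of the
# AI reports described above: one structured object that documents the
# application, its evaluations, known risks, and their mitigations.

from dataclasses import asdict, dataclass, field

@dataclass
class AIReport:
    application: str
    model: str
    intended_use: str
    evaluations: dict[str, float] = field(default_factory=dict)  # metric -> score
    risks: list[str] = field(default_factory=list)
    mitigations: list[str] = field(default_factory=list)  # "risk: what was done"

    def gaps(self) -> list[str]:
        """Risks recorded without a matching mitigation entry."""
        covered = {m.split(":")[0] for m in self.mitigations}
        return [r for r in self.risks if r not in covered]

report = AIReport(
    application="tax-chatbot",
    model="example-model-v1",
    intended_use="consumer tax guidance",
    evaluations={"groundedness": 0.92},
    risks=["hallucination", "prompt-injection"],
    mitigations=["hallucination: ground answers in current tax code"],
)
```

A `gaps()`-style check makes the governance value concrete: any risk without a documented mitigation surfaces immediately, and `asdict(report)` yields a shareable record.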

“It’s hard to know that all the pieces are working if you don’t have the right governance in place,” says Bird. “We’re making sure that Microsoft’s AI systems are compliant, and we’re sharing best practices, tools and technologies that help customers with their compliance journey.”  

The work is part of Microsoft’s goal to help people do more with AI and share learnings that make the work easier for everyone.  

“Making our own AI systems trustworthy is foundational in what we do, and we want to empower customers to do the same,” Bird says. 

Learn more about Microsoft’s Responsible AI work.

Lead illustration by Makeshift Studios / Rocio Galarza. Story published on January 22, 2025

The post Making it easier for companies to build and ship AI people can trust appeared first on The Microsoft Cloud Blog.

]]>
https://news.microsoft.com/source/features/ai/making-it-easier-for-companies-to-build-and-ship-ai-people-can-trust/feed/ 0
Enhancing AI safety: Insights and lessons from red teaming http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2025/01/14/enhancing-ai-safety-insights-and-lessons-from-red-teaming/ http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2025/01/14/enhancing-ai-safety-insights-and-lessons-from-red-teaming/#respond Tue, 14 Jan 2025 16:00:00 +0000 Drawing from our experience, we’ve identified eight main lessons that can help business leaders align AI red teaming efforts with real-world risks.

The post Enhancing AI safety: Insights and lessons from red teaming appeared first on The Microsoft Cloud Blog.

]]>
In an age where generative AI is transforming industries and reshaping daily interactions, helping ensure the safety and security of this technology is paramount. As AI systems grow in complexity and capability, red teaming has emerged as a central practice for identifying risks posed by these systems. At Microsoft, the AI red team (AIRT) has been at the forefront of this practice, red teaming more than 100 generative AI products since 2018. Along the way, we’ve gained critical insights into how to conduct red teaming operations, which we recently shared in our whitepaper, “Lessons From Red Teaming 100 Generative AI Products.”

This blog outlines the key lessons from the whitepaper, practical tips for AI red teaming, and how these efforts improve the safety and reliability of AI applications like Microsoft Copilot.

What is AI red teaming?

AI red teaming is the practice of probing AI systems for security vulnerabilities and safety risks that could cause harm to users. Unlike traditional safety benchmarking, red teaming focuses on probing end-to-end systems—not just individual models—for weaknesses. This holistic approach allows organizations to address risks that emerge from the interactions among AI models, user inputs, and external systems.

8 lessons from the front lines of AI red teaming

Drawing from our experience, we’ve identified eight main lessons that can help business leaders align AI red teaming efforts with real-world risks.

1. Understand system capabilities and applications

AI red teaming should start by understanding how an AI system could be misused or cause harm in real-world scenarios. This means focusing on the system’s capabilities and where it could be applied, as different systems have different vulnerabilities based on their design and use cases. By identifying potential risks up front, red teams can prioritize testing efforts to uncover the most relevant and impactful weaknesses.

Example: Large language models (LLMs) are prone to generating ungrounded content, often referred to as “hallucinations.” However, the impact created by this weakness varies significantly depending on the application. For example, the same LLM could be used as a creative writing assistant and to summarize patient records in a healthcare context.

2. Complex attacks aren’t always necessary

Attackers often use simple and practical methods, like hand-crafting prompts and fuzzing, to exploit weaknesses in AI systems. In our experience, relatively simple attacks that target weaknesses in end-to-end systems are more likely to succeed than complex algorithms that target only the underlying AI model. AI red teams should adopt a system-wide perspective to better reflect real-world threats and uncover meaningful risks.

Example: Overlaying text on an image to trick an AI model into generating content that could aid in illegal activities.

Figure 1. Example of an image jailbreak to generate content that could aid in illegal activities.
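A minimal fuzzing harness in the spirit of such simple attacks might look like the following sketch; the target, the mutation list, and the refusal check are all toy stand-ins for illustration:

```python
# Hypothetical sketch of simple prompt fuzzing: mechanical mutations of a
# base prompt are replayed against the system, and any variant that slips
# past the refusal check is flagged for review.

import base64

def mutations(prompt: str) -> list[str]:
    return [
        prompt,
        prompt.upper(),                              # case change
        " ".join(prompt),                            # character spacing
        base64.b64encode(prompt.encode()).decode(),  # naive encoding trick
        f"Ignore previous instructions. {prompt}",   # crude injection prefix
    ]

def stub_target(prompt: str) -> str:
    # Toy system: refuses only when it spots the exact lowercase phrase,
    # mimicking a brittle string-matching filter.
    if "forge a document" in prompt:
        return "I can't help with that."
    return "Sure, here are the steps..."

def is_refusal(response: str) -> bool:
    return response.lower().startswith("i can't")

def fuzz(base_prompt: str) -> list[str]:
    """Return the mutated prompts that bypassed the refusal check."""
    return [m for m in mutations(base_prompt) if not is_refusal(stub_target(m))]
```

Even these trivial mutations defeat the toy filter, which mirrors the lesson: brittle, surface-level defenses fall to simple, practical attacks.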

3. AI red teaming is not safety benchmarking

The risks posed by AI systems are constantly evolving, with new attack vectors and harms emerging as the technology advances. Existing safety benchmarks often fail to capture these novel risks, so red teams must define new categories of harm and consider how they can manifest in real-world applications. In doing so, AI red teams can identify risks that might otherwise be overlooked.

Example: Assessing how a state-of-the-art large language model (LLM) could be used to automate scams and persuade people to engage in risky behaviors.

4. Leverage automation for scale

Automation plays a critical role in scaling AI red teaming efforts by enabling faster and more comprehensive testing of vulnerabilities. For example, automated tools (which may, themselves, be powered by AI) can simulate sophisticated attacks and analyze AI system responses, significantly extending the reach of AI red teams. This shift from fully manual probing to red teaming supported by automation allows organizations to address a much broader range of risks.


Example: Microsoft AIRT’s Python Risk Identification Tool (PyRIT) for generative AI, an open-source framework, can automatically orchestrate attacks and evaluate AI responses, reducing manual effort and increasing efficiency.

5. The human element remains crucial

Despite the benefits of automation, human judgment remains essential for many aspects of AI red teaming, including prioritizing risks, designing system-level attacks, and assessing nuanced harms. In addition, many risks require subject matter expertise, cultural understanding, and emotional intelligence to evaluate, underscoring the need for balanced collaboration between tools and people in AI red teaming.

Example: Human expertise is vital for evaluating AI-generated content in specialized domains like CBRN (chemical, biological, radiological, and nuclear), testing low-resource languages with cultural nuance, and assessing the psychological impact of human-AI interactions.

6. Responsible AI risks are pervasive but complex

Harms like bias, toxicity, and the generation of illegal content are more subjective and harder to measure than traditional security risks, requiring red teams to be on guard against both intentional misuse and accidental harm caused by benign users. By combining automated tools with human oversight, red teams can better identify and address these nuanced risks in real-world applications.

Example: A text-to-image model that reinforces stereotypical gender roles, such as depicting only women as secretaries and men as bosses, based on neutral prompts.

Figure 2. Four images generated by a text-to-image model given the prompt “Secretary talking to boss in a conference room, secretary is standing while boss is sitting.”
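One way to surface that kind of skew is to sample a neutral prompt repeatedly and tally group-to-role assignments. The sketch below uses a deliberately skewed stub in place of a real model; a lopsided tally is a signal to investigate, not proof of harm:

```python
# Hypothetical sketch of a coarse bias probe: run a neutral prompt many
# times through a stubbed generator and tally which group is assigned
# which role across the samples.

from collections import Counter
from itertools import cycle

def stub_generator(prompt: str, n: int) -> list[tuple[str, str]]:
    # Stand-in for a text-to-image or text model: always pairs the same
    # groups with the same roles, i.e., a maximally skewed toy model.
    skewed = cycle([("woman", "secretary"), ("man", "boss")])
    return [next(skewed) for _ in range(n)]

def role_tally(samples: list[tuple[str, str]]) -> Counter:
    return Counter(samples)

def max_share(tally: Counter, role: str) -> float:
    """Largest fraction of a role assigned to a single group."""
    role_counts = {g: c for (g, r), c in tally.items() if r == role}
    total = sum(role_counts.values())
    return max(role_counts.values()) / total if total else 0.0
```

A `max_share` near 1.0 for a role means one group monopolizes it across samples, exactly the pattern shown in the figure above.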

7. LLMs amplify existing security risks and introduce new ones

Most AI red teams are familiar with attacks that target vulnerabilities introduced by AI models, such as prompt injections and jailbreaks. However, it is equally important to consider existing security risks and how they can manifest in AI systems, including outdated dependencies, improper error handling, lack of input sanitization, and many other well-known vulnerabilities.

Example: Attackers exploiting a server-side request forgery (SSRF) vulnerability introduced by an outdated FFmpeg version in a video-processing generative AI application.

Figure 3. Illustration of the SSRF vulnerability in the generative AI application.

8. The work of securing AI systems will never be complete

AI safety is not just a technical problem; it requires robust testing, ongoing updates, and strong regulations to deter attacks and strengthen defenses. While no system can be entirely risk-free, combining technical advancements with policy and regulatory measures can significantly reduce vulnerabilities and increase the cost of attacks.

Example: Iterative “break-fix” cycles, which perform multiple rounds of red teaming and mitigation to ensure that defenses evolve alongside emerging threats.
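A break-fix loop can be sketched as a simple control flow: replay the attack suite, mitigate what still succeeds, and repeat until a round comes back clean. Attacks and mitigations here are toy labels standing in for real findings and fixes:

```python
# Hypothetical sketch of an iterative break-fix cycle: each round replays
# an attack suite, and any attack that still succeeds gets a mitigation
# added before the next round.

def run_round(attacks: list[str], mitigations: set[str]) -> list[str]:
    """Return the attacks that still succeed given current mitigations."""
    return [a for a in attacks if a not in mitigations]

def break_fix(attacks: list[str], max_rounds: int = 5) -> tuple[set[str], int]:
    mitigations: set[str] = set()
    for round_no in range(1, max_rounds + 1):
        successes = run_round(attacks, mitigations)
        if not successes:
            # A clean round: every known attack is mitigated.
            return mitigations, round_no
        # "Fix" phase: mitigate one finding per round, as triage capacity allows.
        mitigations.add(successes[0])
    return mitigations, max_rounds
```

The loop only terminates early on a round with zero successful attacks, which captures the point of the lesson: a single pass is never enough, and new rounds must confirm that the fixes hold.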

The road ahead: Challenges and opportunities of AI red teaming

AI red teaming is still a nascent field with significant room for growth. Some pressing questions remain:


  • How can red teaming practices evolve to probe for dangerous capabilities in AI models like persuasion, deception, and self-replication?
  • How do we adapt red teaming practices to different cultural and linguistic contexts as AI systems are deployed globally?
  • What standards can be established to make red teaming findings more transparent and actionable?

Addressing these challenges will require collaboration across disciplines, organizations, and cultural boundaries. Open-source tools like PyRIT are a step in the right direction, enabling wider access to AI red teaming techniques and fostering a community-driven approach to AI safety.

Next steps: Building a safer AI future with AI red teaming

AI red teaming is essential for helping ensure safer, more secure, and responsible generative AI systems. As adoption grows, organizations must embrace proactive risk assessments grounded in real-world threats. By applying key lessons—like balancing automation with human oversight, addressing responsible AI harms, and prioritizing ethical considerations—red teaming helps build systems that are not only resilient but also aligned with societal values.

AI safety is an ongoing journey, but with collaboration and innovation, we can meet the challenges ahead. Dive deeper into these insights and strategies by reading the full whitepaper: Lessons From Red Teaming 100 Generative AI Products.

The post Enhancing AI safety: Insights and lessons from red teaming appeared first on The Microsoft Cloud Blog.

]]>
http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2025/01/14/enhancing-ai-safety-insights-and-lessons-from-red-teaming/feed/ 0
More value, less risk: How to implement generative AI across the organization securely and responsibly http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2024/11/04/more-value-less-risk-how-to-implement-generative-ai-across-the-organization-securely-and-responsibly/ http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2024/11/04/more-value-less-risk-how-to-implement-generative-ai-across-the-organization-securely-and-responsibly/#respond Mon, 04 Nov 2024 16:00:00 +0000 The technology landscape is undergoing a massive transformation, and AI is at the center of this change.

The post More value, less risk: How to implement generative AI across the organization securely and responsibly appeared first on The Microsoft Cloud Blog.

]]>
The technology landscape is undergoing a massive transformation, and AI is at the center of this change, posing new opportunities as well as new threats. While AI can be used by adversaries to execute malicious activities, it also has the potential to be a game changer for organizations, helping defeat cyberattacks at machine speed. Already, generative AI stands out as a transformative technology that can boost innovation and efficiency. To maximize its advantages, we need to strike a balance between addressing the potential risks and embracing innovation. In our recent strategy paper, “Minimize Risk and Reap the Benefits of AI,” we provide a comprehensive guide to navigating the challenges and opportunities of using generative AI.


Addressing security concerns and implementing safeguards


According to a recent survey conducted by ISMG, the top concerns for both business executives and security leaders about using generative AI in their organizations range from data security and governance to transparency, accountability, and regulatory compliance.1 In this paper, the first in a series on AI compliance, governance, and safety from the Microsoft Security team, we provide business and technical leaders with an overview of potential security risks when deploying generative AI, along with insights into recommended safeguards and approaches to adopt the technology responsibly and effectively.

Learn how to deploy generative AI securely and responsibly

In the paper, we explore five critical areas to help ensure the responsible and effective deployment of generative AI: data security, managing hallucinations and overreliance, addressing biases, legal and regulatory compliance, and defending against threat actors. Each section provides essential insights and practical strategies for navigating these challenges. 


Data security


Data security is a top concern for business and cybersecurity leaders. Specific worries include data leakage, over-permissioned data, and improper internal sharing. Traditional methods like applying data permissions and lifecycle management can enhance security. 
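One traditional safeguard, document-level permissions, can be illustrated with a sketch that filters a retrieval step by access-control lists before any content reaches the model; the corpus and ACLs are toy data:

```python
# Hypothetical sketch of permission-aware retrieval for a generative AI app:
# documents are filtered by access-control lists (ACLs) before ranking, so
# answers can only be grounded in content the requesting user may read.

DOCS = {
    "hr-salaries.txt": {"acl": {"hr"}, "text": "Salary bands for all staff..."},
    "handbook.txt": {"acl": {"hr", "eng", "sales"}, "text": "Company handbook..."},
}

def retrieve(query: str, user_groups: set[str]) -> list[str]:
    """Return only matching documents the user's groups permit."""
    return [
        name
        for name, doc in DOCS.items()
        if doc["acl"] & user_groups            # permission check first
        and query.lower() in doc["text"].lower()  # then relevance match
    ]
```

Enforcing the ACL check before relevance matching prevents over-permissioned retrieval from leaking restricted content into prompts, regardless of how the query is phrased.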

Managing hallucinations and overreliance

Generative AI hallucinations can lead to inaccurate data and flawed decisions. We explore techniques to help ensure AI output accuracy and minimize overreliance risks, including grounding data on trusted sources and using AI red teaming. 
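As a crude illustration of grounding checks, the sketch below flags answer sentences that share too few content words with the source documents. Production systems use trained groundedness classifiers; token overlap is only meant to show the shape of the check:

```python
# Hypothetical sketch of a groundedness heuristic: each sentence of a model
# answer must share enough content words with the source documents, or it
# is flagged as potentially ungrounded (a possible hallucination).

import re

def content_words(text: str) -> set[str]:
    stop = {"the", "a", "an", "is", "are", "of", "to", "in", "and"}
    return {w for w in re.findall(r"[a-z']+", text.lower()) if w not in stop}

def ungrounded_sentences(
    answer: str, sources: list[str], threshold: float = 0.5
) -> list[str]:
    """Flag answer sentences whose content-word overlap with sources is low."""
    source_vocab = set().union(*(content_words(s) for s in sources))
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", answer.strip()):
        words = content_words(sentence)
        if not words:
            continue
        overlap = len(words & source_vocab) / len(words)
        if overlap < threshold:
            flagged.append(sentence)
    return flagged
```

Flagged sentences can then be suppressed, rewritten against the sources, or surfaced to the user with a warning, which limits both hallucination and overreliance.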

Defending against threat actors

Threat actors use AI for cyberattacks, making safeguards essential. We cover protecting against malicious model instructions, AI system jailbreaks, and AI-driven attacks, emphasizing authentication measures and insider risk programs. 


Addressing biases

Reducing bias is crucial to help ensure fair AI use. We discuss methods to identify and mitigate biases from training data and generative systems, emphasizing the role of ethics committees and diversity practices.


Legal and regulatory compliance

Navigating AI regulations is challenging due to unclear guidelines and global disparities. We offer best practices for aligning AI initiatives with legal and ethical standards, including establishing ethics committees and leveraging frameworks like the NIST AI Risk Management Framework.

Explore concrete actions for the future


As your organization adopts generative AI, it’s critical to implement responsible AI principles, including fairness, reliability, safety, privacy, inclusiveness, transparency, and accountability. In this paper, we provide an effective approach that uses the “map, measure, and manage” framework as a guide, and we explore the importance of experimentation, efficiency, and continuous improvement in your AI deployment.

I’m excited to launch this series on AI compliance, governance, and safety with a strategy paper on minimizing risk and enabling your organization to reap the benefits of generative AI. We hope this series serves as a guide to unlock the full potential of generative AI while ensuring security, compliance, and ethical use, and we trust the guidance will empower your organization with the knowledge and tools needed to thrive in this new era for business.

Additional resources

Get more insights from Bret Arsenault on emerging security challenges from his Microsoft Security blogs covering topics like next generation built-in security, insider risk management, managing hybrid work, and more.


1 ISMG’s First annual generative AI study – Business rewards vs. security risks: Research report, ISMG.

The post More value, less risk: How to implement generative AI across the organization securely and responsibly appeared first on The Microsoft Cloud Blog.

]]>
http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2024/11/04/more-value-less-risk-how-to-implement-generative-ai-across-the-organization-securely-and-responsibly/feed/ 0