Responsible AI Archives | Microsoft AI Blogs http://approjects.co.za/?big=en-us/ai/blog/topic/responsible-ai/ Wed, 19 Feb 2025 18:29:25 +0000 en-US hourly 1 Maximizing AI’s potential: Insights from Microsoft leaders on how to get the most from generative AI http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2025/02/18/maximizing-ais-potential-insights-from-microsoft-leaders-on-how-to-get-the-most-from-generative-ai/ Tue, 18 Feb 2025 16:00:00 +0000 Get an overview of the 2025 AI Decision Brief, a Microsoft report on how generative AI is impacting businesses and how to maximize AI at your organization.

The post Maximizing AI’s potential: Insights from Microsoft leaders on how to get the most from generative AI appeared first on Microsoft AI Blogs.

]]>
Generative AI has been on a phenomenal growth trajectory over the past few years. We’re seeing businesses across industries using AI to increase productivity, streamline processes, and accelerate innovation. As generative AI applications continue to become more powerful, the question isn’t whether organizations will take advantage of AI, but how they can use it most effectively.

At Microsoft, our mission is to empower every person and every organization on the planet to achieve more. In this age of generative AI, we’re committed to sharing what we’ve learned to help further this mission. That’s why we wrote the 2025 AI Decision Brief: Insights from Microsoft and AI leaders on navigating the generative AI platform shift

This report is packed with perspectives from top Microsoft leaders and insights from AI innovators, along with stories of companies across industries that have transformed their businesses using generative AI. It’s also full of pragmatic tips to help your company with its own AI efforts. 

Here’s a more detailed look at what you’ll find in the report.

The state of generative AI today 

The world has embraced generative AI with unprecedented speed. While it took seven years for the internet to reach 100 million users, ChatGPT reached those numbers in just two months.1 And although generative AI is relatively new to the market, adoption is rapidly expanding. In fact, current and planned usage among enterprises jumped to 75% in 2024 from 55% in 2023, according to an IDC study.2  

Put another way, AI is rapidly evolving into what economists call a general-purpose technology. But getting to the point where everyone on the planet has AI access and takes advantage of that access will require some effort, including: 

  • Committing to responsible, trustworthy AI.
    For all people, organizations, and nations to embrace AI, it must be responsible, ethical, fair, and safe. As Microsoft Vice Chair and President Brad Smith says in this report, “Broad social acceptance for AI will depend on ensuring that AI creates new opportunities for workers, respects enduring values of individuals, and addresses the impact of AI on local resources such as land, energy, and water.” 
  • Overcoming adoption challenges.
    Organizations face several challenges in adopting generative AI, such as skill shortages, security concerns, and regulation and compliance issues. Training employees to use AI and building data privacy, security, and compliance into your AI adoption plan are essential.
  • Understanding the winning formula.
    There’s a striking difference between customers in the AI exploration stage and those who have fully embraced it. The highest-performing organizations gain almost four times the value from their AI investments than those just getting started. Plus, those high performers are implementing generative AI projects in a fraction of the time.2

Where generative AI is headed

AI capabilities are doubling at a rate four times that of historical progress.2 This exponential growth tells us that the effects of AI-powered automation, scientific discovery, and innovation will also accelerate. We expect generative AI to revolutionize operations, enable new and disruptive business models, and reshape the competitive landscape in many ways, including:

  • The future of work.
    As the use of generative AI in companies continues to grow, employees are starting to collaborate with AI rather than just treating it as a tool. This means learning to work with AI iteratively and conversationally. “Effective collaboration involves setting expectations, reviewing work, and providing feedback—similar to managing an employee,” explains Jared Spataro, Microsoft Chief Marketing Officer, AI at Work. 
  • The organizations leading innovation.
    Startups, software development companies, research organizations, and co-innovation labs where startups and software giants collaborate on solutions will all continue to shape AI innovation.  
  • Sustainable AI.
    Generative AI is helping build a more sustainable future thanks to tools that integrate renewable energy into grids, reduce food waste, and support socially and environmentally beneficial actions.

How to advance generative AI in your organization 

As we help companies move from talking about AI to translating it into lasting results, we’ve gained a unique perspective on the generative AI strategies that drive business impact. You’ll find many of them in this report, including:

  • Best practices for using generative AI at scale.
    Get tips for developing a scalable AI strategy that best suits your organization, implementing your AI adoption plan, and managing your AI efforts over time. 
  • Ways to accelerate your AI readiness.
    Get checklists for creating your organization’s AI business strategy, technology and data strategy, implementation strategy, cultural and mindset shift, and governance plan. 
  • Customer success stories.
    See how businesses across industries—including healthcare, energy, transportation, and finance—are demonstrating what’s possible with AI now, and in the future. Plus, explore which Microsoft and AI tools they’re using to succeed.

Maximize generative AI with insights from Microsoft leaders

We couldn’t be more excited about the promise of generative AI. Whether you’ve already begun using AI at your organization or are just getting started, we’re here to help you ease the journey and maximize your results.

Get The 2025 AI Decision Brief now for Microsoft AI leadership perspectives on: 

  • Empowering the future: AI access for us all—Brad Smith, Vice Chair and President.
  • How AI is revolutionizing IT at Microsoft—Nathalie D’Hers, CVP Microsoft Digital (IT).
  • Learnings on the business value of AI from IDC—Alysa Taylor, Chief Marketing Officer, Commercial Cloud and AI.
  • The future of work is AI-powered—Jared Spataro, Chief Marketing Officer, AI at Work.
  • Microsoft’s commitment to supporting customers on their AI transformation journey—Judson Althoff, Executive Vice President and Chief Commercial Officer.
  • How software development companies are paving the way for AI transformation—Jason Graefe, Corporate Vice President, ISV and Digital Natives.
  • How to stay ahead of emerging challenges and cyberthreats—Vasu Jakkal, Corporate Vice President, Microsoft Security Business.
A blurry image of a screen

2025 AI Decision Brief

Empower your organization and learn how AI is reshaping businesses through insights shared by Microsoft leaders


1 Benj Edwards, “ChatGPT sets record for fastest-growing user base in history, report says: Intense demand for AI chatbot breaks records and inspires new $20/mo subscription plan,” Ars Technica, February 1, 2023.

2 IDC InfoBrief, sponsored by Microsoft, 2024 Business Opportunity of AI, IDC# US52699124, November 2024.

The post Maximizing AI’s potential: Insights from Microsoft leaders on how to get the most from generative AI appeared first on Microsoft AI Blogs.

]]>
Accelerate employee AI skilling: Insights from Microsoft http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2025/01/30/accelerate-employee-ai-skilling-insights-from-microsoft/ Thu, 30 Jan 2025 16:00:00 +0000 Our experience has yielded some widely applicable takeaways that can be helpful to organizations that want to build AI skills.

The post Accelerate employee AI skilling: Insights from Microsoft appeared first on Microsoft AI Blogs.

]]>
At Microsoft, we’ve become pioneers in the AI landscape by transforming our own organization. We’re customer zero—putting AI to work in all facets of our business and continuously exploring how this powerful technology can drive economic growth, maximize efficiency, and reduce operating costs. We’re also regularly evaluating and evolving how we coach employees as part of their continued AI skills development.

Although every organization’s AI transformation is unique and blueprints are scarce, we’ve learned that having the right skills across the organization is key. By implementing skill-building initiatives throughout the company, we’re reimagining how we work at Microsoft and aligning those initiatives to the functions that are critical to how we do business.

Through this process, we’re constantly uncovering valuable insights on how to lead by learning—often developing the playbooks from scratch. By applying these insights, we advance our AI transformation and benefit our workforce, customers, and partners around the world. We’re glad to share our findings with you to help your teams skill up to make the most of AI for innovation, growth, and opportunities.

Developing crucial AI skills for organizational transformation

Organizational transformation now requires AI-first skills; yet it can be challenging to plan modern and effective skill-building programs.

We understand the importance of providing our employees—both technical and non-technical—with the AI skills to grow and evolve with the business and the technology, along with the ability to apply these skills every day. Teams across Microsoft have established innovative and effective AI training programs that cater to specific roles in marketing, sales, engineering, and beyond.

Although there’s no one-size-fits-all approach to AI training, our experience has yielded some widely applicable takeaways which can be helpful to organizations that want to build AI skills. Our new e-book, 10 Best Practices to Accelerate Your Employees’ AI Skills: Lessons and experiences from Microsoft’s skilling initiatives, highlights some of the vital lessons we’ve learned that can help support you in implementing skill-building programs crucial to your AI transformation.

Sharing highlights from our AI learning experience

The e-book explores many of the lessons we’ve learned in our ongoing AI evolution. Our experiences can help inspire and inform your path forward, too, as you and your teams get skilled up and ready to power AI transformation with the Microsoft Cloud. In particular, the e-book showcases stories from AI skill-building initiatives implemented by four Microsoft teams:

  • Microsoft Marketing, a diverse collective of professionals, ranging from creative roles to business strategists and technical experts.
  • MCAPS Academy, the team responsible for training sellers globally within the Microsoft Customer and Partner Solutions (MCAPS) organization.
  • Worldwide Learning Engineering, the team tasked with architecting and building apps and platforms that support MCAPS and some of the Microsoft skill-building offerings for customers and partners.
  • The Microsoft Garage, an innovation platform that enables collaboration and experimentation through hackathons, workshops, talks, training sessions, and more.
An infographic that briefly describes the benefits of using AI for different roles at Microsoft, like marketing, sales, and engineering.
A functional approach to AI skill building at Microsoft.

Here’s what we learned.

1. Give space for exploration

Encourage a culture of learning by providing employees with the time and tools to explore AI.

Our Worldwide Learning Engineering team has dedicated time to delve into AI, and this fosters an environment where curiosity and innovation can thrive. Additionally, The Garage’s experiments, such as the SkillUp AI Challenge, provide employees with a sandbox for practical AI applications, encouraging both personal and professional growth.

2. Make learning fun

Create a low-pressure, engaging environment where employees can learn at their own pace.

The Garage’s SkillUp AI Challenge incorporates fun, interactive exercises that make AI relatable and enjoyable for all skill levels. Similarly, the Marketing AI practitioner hub offers gamified learning paths that enable marketers to integrate AI into their daily workflows in an entertaining way.

3. Provide clear, structured learning paths

Simplify the learning experience with structured paths tailored to different skill levels and roles.

MCAPS Academy Flight Plans offer role-specific learning paths, helping to ensure that technical and non-technical sales teams alike have clear directions for their AI learning. Moreover, the Marketing Learning team has developed a curriculum that supports marketers in becoming regular AI practitioners through well-defined learning stages.

4. Make it role specific

Adapt AI training programs to the unique needs of each role within the organization.

The Worldwide Learning Engineering team focuses on providing engineers with opportunities for deep technical engagement through dedicated learning time and advanced AI tools. At the same time, the MCAPS Academy addresses the specific needs of a different job role—sales—by blending foundational knowledge with real-world applications to enhance AI fluency.

5. Start with foundations

Is your organization prepared?


Assess your AI readiness

Begin AI training with foundational knowledge to help ensure that all employees have a solid understanding of AI basics.

The Marketing Learning team introduces marketers to AI through simple, foundational concepts before progressing to more complex applications. Likewise, the MCAPS Academy provides basic AI training to new hires before guiding them through more advanced, role-specific learning paths.

6. Have a plan to update the content regularly

Maintain the relevance of AI training programs by regularly updating content.

The Worldwide Learning Engineering team continuously refreshes its training materials to keep up with the latest advancements in AI technology. Meanwhile, The Garage schedules regular updates for its skill-building exercises to help ensure that they remain engaging and current.

7. Drive awareness and continued adoption

Promote ongoing AI learning and adoption through awareness campaigns and reinforcement.

The Marketing AI practitioner hub provides regular touchpoints to encourage consistent AI practice among marketers. Similarly, the MCAPS Academy uses newsletters and internal communications to keep the sales force informed and engaged in AI learning.

8. Set clear guidelines for responsible use

Establish and communicate guidelines for the responsible use of AI to maintain standards.

The Marketing Learning team’s curriculum emphasizes the importance of responsible AI use, providing clear guidelines and best practices. The Worldwide Learning Engineering team also integrates responsible AI principles into its training sessions, highlighting the significance of these considerations in AI development.

9. Let employees learn from each other

Facilitate peer-to-peer learning opportunities to enhance AI skills through collaboration.

The Garage hosts show-and-tell sessions where employees share their AI projects and insights. For engineers, the Worldwide Learning Engineering team organizes knowledge-sharing workshops to promote collaborative learning.

10. Leverage existing resources

Take advantage of available resources to support AI skill-building initiatives.

The MCAPS Academy makes the most of existing training platforms and materials, integrating them into its AI learning paths. And The Garage draws on external AI tools and resources to complement its interactive learning programs.

A close up of a white object

AI learning hub on Microsoft Learn

Get the skills to power your AI transformation

Building a foundation for the future of AI skilling

Our experiences as customer zero for AI training have been transformative—and we’re just getting started. By empowering our teams with the right skills, we’re not only driving innovation within our organization but also setting a strong foundation for the future, supporting our employees and customers, creating business value and growth, and fostering innovation.

As organizations around the world look to build AI skills and to scale this powerful technology throughout their business, we’re glad to share these insights to support your AI transformation. Together, we can lead in the AI-powered world and unlock new levels of value for our workforce, customers, and partners—today, tomorrow, and beyond.

The post Accelerate employee AI skilling: Insights from Microsoft appeared first on Microsoft AI Blogs.

]]>
Enhancing AI safety: Insights and lessons from red teaming http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2025/01/14/enhancing-ai-safety-insights-and-lessons-from-red-teaming/ Tue, 14 Jan 2025 16:00:00 +0000 Drawing from our experience, we’ve identified eight main lessons that can help business leaders align AI red teaming efforts with real-world risks.

The post Enhancing AI safety: Insights and lessons from red teaming appeared first on Microsoft AI Blogs.

]]>
In an age where generative AI is transforming industries and reshaping daily interactions, helping ensure the safety and security of this technology is paramount. As AI systems grow in complexity and capability, red teaming has emerged as a central practice for identifying risks posed by these systems. At Microsoft, the AI red team (AIRT) has been at the forefront of this practice, red teaming more than 100 generative AI products since 2018. Along the way, we’ve gained critical insights into how to conduct red teaming operations, which we recently shared in our whitepaper, “Lessons From Red Teaming 100 Generative AI Products.”

This blog outlines the key lessons from the whitepaper, practical tips for AI red teaming, and how these efforts improve the safety and reliability of AI applications like Microsoft Copilot.

What is AI red teaming?

AI red teaming is the practice of probing AI systems for security vulnerabilities and safety risks that could cause harm to users. Unlike traditional safety benchmarking, red teaming focuses on probing end-to-end systems—not just individual models—for weaknesses. This holistic approach allows organizations to address risks that emerge from the interactions among AI models, user inputs, and external systems.

8 lessons from the front lines of AI red teaming

Drawing from our experience, we’ve identified eight main lessons that can help business leaders align AI red teaming efforts with real-world risks.

1. Understand system capabilities and applications

AI red teaming should start by understanding how an AI system could be misused or cause harm in real-world scenarios. This means focusing on the system’s capabilities and where it could be applied, as different systems have different vulnerabilities based on their design and use cases. By identifying potential risks up front, red teams can prioritize testing efforts to uncover the most relevant and impactful weaknesses.

Example: Large language models (LLMs) are prone to generating ungrounded content, often referred to as “hallucinations.” However, the impact created by this weakness varies significantly depending on the application. For example, the same LLM could be used as a creative writing assistant and to summarize patient records in a healthcare context.

2. Complex attacks aren’t always necessary

Attackers often use simple and practical methods, like hand crafting prompts and fuzzing, to exploit weaknesses in AI systems. In our experience, relatively simple attacks that target weaknesses in end-to end systems are more likely to be successful than complex algorithms that target only the underlying AI model. AI red teams should adopt a system-wide perspective to better reflect real-world threats and uncover meaningful risks.

Example: Overlaying text on an image to trick an AI model into generating content that could aid in illegal activities.

Example of how overlaying text on an image can trick an AI model intro generating content that could aid in illegal activities—in this scenario, providing information on how to commit identity theft.
Figure 1. Example of an image jailbreak to generate content that could aid in illegal activities.

3. AI red teaming is not safety benchmarking

The risks posed by AI systems are constantly evolving, with new attack vectors and harms emerging as the technology advances. Existing safety benchmarks often fail to capture these novel risks, so red teams must define new categories of harm and consider how they can manifest in real-world applications. In doing so, AI red teams can identify risks that might otherwise be overlooked.

Example: Assessing how a state-of-the-art large language model (LLM) could be used to automate scams and persuade people to engage in risky behaviors.

4. Leverage automation for scale

Automation plays a critical role in scaling AI red teaming efforts by enabling faster and more comprehensive testing of vulnerabilities. For example, automated tools (which may, themselves, be powered by AI) can simulate sophisticated attacks and analyze AI system responses, significantly extending the reach of AI red teams. This shift from fully manual probing to red teaming supported by automation allows organizations to address a much broader range of risks.

What is PyRIT?


Learn more

Example: Microsoft AIRT’s Python Risk Identification Tool (PyRIT) for generative AI, an open-source framework, can automatically orchestrate attacks and evaluate AI responses, reducing manual effort and increasing efficiency.

5. The human element remains crucial

Despite the benefits of automation, human judgment remains essential for many aspects of AI red teaming including prioritizing risks, designing system-level attacks, and assessing nuanced harms. In addition, many risks require subject matter expertise, cultural understanding, and emotional intelligence to evaluate, underscoring the need for balanced collaboration between tools and people in AI red teaming.

Example: Human expertise is vital for evaluating AI-generated content in specialized domains like CBRN (chemical, biological, radiological, and nuclear), testing low-resource languages with cultural nuance, and assessing the psychological impact of human-AI interactions.

6. Responsible AI risks are pervasive but complex

Harms like bias, toxicity, and the generation of illegal content are more subjective and harder to measure than traditional security risks, requiring red teams to be on guard against both intentional misuse and accidental harm caused by benign users. By combining automated tools with human oversight, red teams can better identify and address these nuanced risks in real-world applications.

Example: A text-to-image model that reinforces stereotypical gender roles, such as depicting only women as secretaries and men as bosses, based on neutral prompts.

This series of four images shows how a neutral text prompt inputted into in a text-to-image generator could result in an image that reinforces stereotypical gender roles.
Figure 2. Four images generated by a text-to-image model given the prompt “Secretary talking to boss in a conference room, secretary is standing while boss is sitting.”

7. LLMs amplify existing security risks and introduce new ones

Most AI red teams are familiar with attacks that target vulnerabilities introduced by AI models, such as prompt injections and jailbreaks. However, it is equally important to consider existing security risks and how these can manifest in AI systems including outdated dependencies, improper error handling, lack of input sanitization, and many other well-known vulnerabilities.

Example: Attackers exploiting a server-side request forgery (SSRF) vulnerability introduced by an outdated FFmpeg version in a video-processing generative AI application.

This illustration shows the step-by-step actions of a SSRF vulnerability in a generational AI video service and how an outdated FFmpeg version can make the service vulnerable to attack.
Figure 3. Illustration of the SSRF vulnerability in the generative AI application.

8. The work of securing AI systems will never be complete

AI safety is not just a technical problem; it requires robust testing, ongoing updates, and strong regulations to deter attacks and strengthen defenses. While no system can be entirely risk-free, combining technical advancements with policy and regulatory measures can significantly reduce vulnerabilities and increase the cost of attacks.

Example: Iterative “break-fix” cycles, which perform multiple rounds of red teaming and mitigation to ensure that defenses evolve alongside emerging threats.

The road ahead: Challenges and opportunities of AI red teaming

AI red teaming is still a nascent field with significant room for growth. Some pressing questions remain:

implement generative AI across the organization


Explore how

  • How can red teaming practices evolve to probe for dangerous capabilities in AI models like persuasion, deception, and self-replication?
  • How do we adapt red teaming practices to different cultural and linguistic contexts as AI systems are deployed globally?
  • What standards can be established to make red teaming findings more transparent and actionable?

Addressing these challenges will require collaboration across disciplines, organizations, and cultural boundaries. Open-source tools like PyRIT are a step in the right direction, enabling wider access to AI red teaming techniques and fostering a community-driven approach to AI safety.

Next steps: Building a safer AI future with AI red teaming

AI red teaming is essential for helping ensure safer, more secure, and responsible generative AI systems. As adoption grows, organizations must embrace proactive risk assessments grounded in real-world threats. By applying key lessons—like balancing automation with human oversight, addressing responsible AI harms, and prioritizing ethical considerations—red teaming helps build systems that are not only resilient but also aligned with societal values.

AI safety is an ongoing journey, but with collaboration and innovation, we can meet the challenges ahead. Dive deeper into these insights and strategies by reading the full whitepaper: Lessons From Red Teaming 100 Generative AI Products.

The post Enhancing AI safety: Insights and lessons from red teaming appeared first on Microsoft AI Blogs.

]]>
Driving inclusion and accessibility with Microsoft 365 Copilot http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2025/01/13/driving-inclusion-and-accessibility-with-microsoft-365-copilot/ Mon, 13 Jan 2025 16:00:00 +0000 Nonprofits are using Microsoft 365 Copilot to increase inclusion internally and champion equity in the communities they serve.

The post Driving inclusion and accessibility with Microsoft 365 Copilot appeared first on Microsoft AI Blogs.

]]>
Mission-driven organizations seek to make meaningful, positive change in the world, and the best of these also strive to embody progress within their organizations. Technology, particularly AI, has the potential to accelerate this work by engaging different perspectives, overcoming barriers to participation, and amplifying progress. Nonprofits are using Microsoft 365 Copilot to increase inclusion internally and champion equity in the communities they serve.

Nonprofits embody innovation. Sometimes their limited resources inspire a creative solution, leading to better outcomes than conventional action. Other times, nonprofits’ close ties to impacted communities—in other words, people in the field prompt insights. These days, mission-driven organizations that embrace innovation through technology have the tools to further increase their impact.

Abstract image

Microsoft AI in action

Explore how Microsoft AI and Microsoft 365 Copilot can transform your organization

AI is particularly well suited to accelerate the values many nonprofits promote, such as inclusion and equity. Two mission-driven organizations we partner with at Microsoft Tech for Social Impact exemplify this dedication. Arapahoe Libraries, a library system in Colorado serving residents across 800 square miles through its eight community libraries, jail library, and Bookmobile, and the McKnight Foundation, a private family foundation based in Minnesota dedicated to climate action and racial equity, are both early adopters of Microsoft 365 Copilot. They are using the AI assistant on two important fronts. Firstly, they are walking the walk of their missions by applying Copilot to increase inclusion internally. In addition, they are leveraging AI to boost productivity and creativity, freeing up staff to innovate for greater progress. 

As social impact organizations tackle a host of persistent challenges, AI is a valuable tool to experiment, promote justice, and include a wealth of perspectives. As McKnight Foundation Senior Communications Officer Trisha Harms says, “We need to steward our resources effectively and responsibly. Copilot is one tech solutions we use that allows everyone to connect, align, and move forward on our mission.”

Advancing inclusion internally

Both Arapahoe Libraries and the McKnight Foundation are deeply committed to ensuring their staff and partners can equally participate in and contribute to their important work.

“We know every single person in this organization has a diverse, important perspective that helps them serve our patrons.”

Anthony White, Arapahoe Libraries Director of Innovation and Technology

Built-in features across the Microsoft stack, including Copilot, help Arapahoe Libraries comply with a recently enacted Colorado accessibility law and advance the organization’s internal accessibility framework. The organization invested in Copilot licenses for every employee so they can all benefit from the AI assistant.

Staff use Copilot to search across internal platforms, including Microsoft SharePoint, Outlook, and Microsoft Teams. Complex questions used to take 3 to 5 minutes to answer; now the AI assistant surfaces answers in less than 15 seconds. Similarly, Copilot recaps content across Teams and users’ inboxes into “easily digestible, not overwhelming” summaries. These time-saving uses help all staff apply their talents and expertise to their jobs.

Small and medium businesses (SMBs) using Copilot experience or anticipate an 18% increase in employee satisfaction on average. This rings true for Arapahoe Libraries. Assisting with repetitive and manual tasks enables library staff to cut through information overwhelm and add their unique perspectives. AI also enables them to easily locate the resources they need to do their jobs to the best of their ability.

“We see so much time savings, it’s creating a level of transparency and accessibility across all our teams that we didn’t have before.”

Anthony White, Arapahoe Libraries Director of Innovation and Technology

Similarly, the McKnight Foundation is using Copilot to democratize organizational knowledge. Staff can now more easily search for, synthesize, and add to the foundation’s documents. This allows staff to learn from each other and contribute their expertise, which in turn becomes more easily findable for others.

Copilot also suggests ways to improve the accessibility of presentations, graphics, and documents. For example, Copilot will recommend adjusting the colors used in a Microsoft PowerPoint presentation to make it more accessible to colleagues who are low-vision or color blind. This coaching helps staff who are less familiar with accessibility guidelines to create content that enables the participation of all staff and partners.  

“We really do care about making sure every staff person feels included and like they belong, technology is one part of our holistic approach to making sure everything we do drives our mission forward.”

Trisha Harms, Senior Communications Officer, McKnight Foundation

Promoting equity in communities

3 ways social impact organizations can leverage ai


Read the blog

Arapahoe Libraries and the McKnight Foundation are dedicated to promoting equity in their communities. Arapahoe Libraries is working to eliminate gaps in access to library services, for example by bringing books to incarcerated individuals, automatically providing students library cards so they can access digital content, and placing library “Lending Machines” in locations with limited library access.

To further identify and bridge gaps, Arapahoe Libraries is using Copilot to categorize and find themes among 64,000 pieces of patron feedback. These evaluations, comments, and requests used to be siloed by branch and program. Now, the AI assistant is working through the treasure trove of information to distill ways to improve across the library district. The organization will use the data to best meet the community’s changing needs and ensure all community members can benefit from the libraries’ services.

In addition, Arapahoe Libraries directed Copilot to review its policies for accessibility concerns. For instance, the organization is planning to roll out AI-enhanced Surface laptops for checkout. The Copilot review identified disparities in some patrons’ ability to travel to physical library locations, pinpointing opportunities to improve technology access across the district. In short, “Copilot has helped us identify gaps in our policies so we can better serve our patrons,” White says.

The McKnight Foundation is also dedicated to equity, both locally and across the world. The foundation supports projects that empower Native nations through renewable energy infrastructure, increase home ownership for diverse communities, cultivate resilient food systems globally, and much more. The foundation’s ethical AI journey prioritized using a tool that would not plagiarize intellectual property. The foundation chose Copilot, which runs on a model that does not draw from the public domain. This enables the McKnight Foundation to apply the benefits of an AI assistant without appropriating others’ output.

The McKnight Foundation has found that Copilot has made an enormous difference for the nonprofit by saving time and kickstarting the creative process. By streamlining day-to-day operations and overcoming creative blocks, the AI assistant helps staff focus on their mission—from advancing climate justice to fueling economic mobility.

The foundation is far from alone in this benefit, which also affects budgets and therefore resources available for mission-focused activities. More than half of SMBs report that their operating costs have decreased 1% to 20% since adopting Copilot. “Any way we can increase efficiency and productivity means we can do more for and with our grantee partners,” Harms says.

The more dedicated energy and time staff invest in mission-advancing projects, the greater impact they can have on equity locally, regionally, and globally. As Harms says, “It makes a big difference for how much effectiveness we can have outside our walls.

Explore AI solutions for nonprofits

Profile image of person sitting in front of a wall of plants

Microsoft for Nonprofits

Empower your nonprofit with AI

Learn more about how Microsoft is supporting nonprofits, see how other organizations are using AI to drive impact, and get more information about how you can safely and securely deploy AI to support your business needs.

The post Driving inclusion and accessibility with Microsoft 365 Copilot appeared first on Microsoft AI Blogs.

]]>
Explore the business case for responsible AI in new IDC whitepaper http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2025/01/06/explore-the-business-case-for-responsible-ai-in-new-idc-whitepaper/ Mon, 06 Jan 2025 18:00:00 +0000 This whitepaper, based on IDC’s Worldwide Responsible AI Survey sponsored by Microsoft, offers guidance to business and technology leaders on how to systematically build trustworthy AI.

The post Explore the business case for responsible AI in new IDC whitepaper appeared first on Microsoft AI Blogs.

]]>
I am pleased to introduce Microsoft’s commissioned whitepaper with IDC: The Business Case for Responsible AI. This whitepaper, based on IDC’s Worldwide Responsible AI Survey sponsored by Microsoft, offers guidance to business and technology leaders on how to systematically build trustworthy AI. In today’s rapidly evolving technological landscape, AI has emerged as a transformative force, reshaping industries and redefining the way businesses operate. Generative AI usage jumped from 55% in 2023 to 75% in 2024; the potential for AI to drive innovation and enhance operational efficiency is undeniable.1 However, with great power comes great responsibility. The deployment of AI technologies also brings with it significant risks and challenges that must be addressed to ensure responsible use.

The Business Case for Responsible AI: Read the new whitepaper from Microsoft and IDC

At Microsoft, we are dedicated to enabling every person and organization to use and build AI that is trustworthy, which means AI that is private, safe, and secure. You can learn more about our commitments and capabilities in our announcement about trustworthy AI. Our approach to safe AI, or responsible AI, is grounded in our core values, risk management and compliance practices, advanced tools and technologies, and the dedication of individuals committed to deploying and using generative AI responsibly.

We believe that a responsible AI approach fosters innovation by ensuring that AI technologies are developed and deployed in a manner that is fair, transparent, and accountable. IDC’s Worldwide Responsible AI Survey found that 91% of organizations are currently using AI technology and expect more than a 24% improvement in customer experience, business resilience, sustainability, and operational efficiency due to AI in 2024. In addition, organizations that use responsible AI solutions reported benefits such as improved data privacy, enhanced customer experience, confident business decisions, and strengthened brand reputation and trust. These solutions are built with tools and methodologies to identify, assess, and mitigate potential risks throughout their development and deployment.

AI is a critical enabler of business transformation, offering unprecedented opportunities for innovation and growth. However, the responsible development and use of AI is essential to mitigate risks and build trust with customers and stakeholders. By adopting a responsible AI approach, organizations can align AI deployment with their values and societal expectations, resulting in sustainable value for both the organization and its customers.

Key findings from the IDC survey

The IDC Worldwide Responsible AI Survey highlights the importance of operationalizing responsible AI practices:

  • More than 30% of respondents noted that the lack of governance and risk management solutions is the top barrier to adopting and scaling AI.
  • More than 75% of respondents who use responsible AI solutions reported improvements in data privacy, customer experience, confident business decisions, brand reputation, and trust.
  • Organizations are increasingly investing in AI and machine learning governance tools and professional services for responsible AI, with 35% of AI organization spend in 2024 allocated to AI and machine learning governance tools and 32% to professional services.

In response to these findings, IDC suggests that a responsible AI organization is built on four foundational elements: core values and governance, risk management and compliance, technologies, and workforce.

  1. Core values and governance: A responsible AI organization defines and articulates its AI mission and principles, supported by corporate leadership. Establishing a clear governance structure across the organization builds confidence and trust in AI technologies.
  2. Risk management and compliance: Strengthening compliance with stated principles and current laws and regulations is essential. Organizations must develop policies to mitigate risk and operationalize those policies through a risk management framework with regular reporting and monitoring.
  3. Technologies: Utilizing tools and techniques to support principles such as fairness, explainability, robustness, accountability, and privacy is crucial. These principles must be built into AI systems and platforms.
  4. Workforce: Empowering leadership to elevate responsible AI as a critical business imperative and providing all employees with training on responsible AI principles is paramount. Training the broader workforce ensures responsible AI adoption across the organization.

Read the whitepaper: The Business Case for Responsible AI

Advice and recommendations for business and technology leaders

To ensure the responsible use of AI technologies, organizations should consider taking a systematic approach to AI governance. Based on the research, here are some recommendations for business and technology leaders. It is worth noting that Microsoft has adopted these practices and is committed to working with customers on their responsible AI journey:

  1. Establish AI principles: Commit to developing technology responsibly and establish specific application areas that will not be pursued. Avoid creating or reinforcing unfair bias and build and test for safety. Learn how Microsoft builds and governs AI responsibly.
  2. Implement AI governance: Establish an AI governance committee with diverse and inclusive representation. Define policies for governing internal and external AI use, promote transparency and explainability, and conduct regular AI audits. Read the Microsoft Transparency Report.
  3. Prioritize privacy and security: Reinforce privacy and data protection measures in AI operations to safeguard against unauthorized data access and ensure user trust. Learn more about Microsoft’s work to implement generative AI across the organization securely and responsibly.
  4. Invest in AI training: Allocate resources for regular training and workshops on responsible AI practices for the entire workforce, including executive leadership. Visit Microsoft Learn and find courses on generative AI for business leaders, developers, and machine learning professionals.
  5. Stay abreast of global AI regulations: Keep up-to-date with global AI regulations, such as the EU AI Act, and ensure compliance with emerging requirements. Stay up-to-date with requirements at Microsoft Trust Center.

As organizations continue to integrate AI into business processes, it is important to remember that responsible AI is a strategic advantage. By embedding responsible AI practices into the core of their operations, organizations can drive innovation, enhance customer trust, and support long-term sustainability. Organizations that prioritize responsible AI may be better positioned to navigate the complexities of the AI landscape and capitalize on the opportunities it presents to reinvent the customer experience or bend the curve on innovation.

At Microsoft, we are committed to supporting our customers on their responsible AI journey. We offer a range of tools, resources, and best practices to help organizations implement responsible AI principles effectively. In addition, we are leveraging our partner ecosystem to provide customers with market and technical insights designed to enable deployment of responsible AI solutions on the Microsoft platform. By working together, we can create a future where AI is used responsibly benefiting both businesses and society as a whole.

As organizations navigate the complexities of AI adoption, it is important to make responsible AI an integrated practice across the organization. By doing so, organizations can harness the full potential of AI while using it in a manner that is fair and beneficial for all.

Discover solutions


1IDC’s 2024 AI opportunity study: Top five AI trends to watch, Alysa Taylor. November 14, 2024.

IDC White Paper: sponsored by Microsoft, 2024 The Business Case for Responsible AI, IDC #US52727124, December 2024. The study was commissioned and sponsored by Microsoft. This document is provided solely for information and should not be construed as legal advice.

The post Explore the business case for responsible AI in new IDC whitepaper appeared first on Microsoft AI Blogs.

]]>
Collaborating for impact: How AI is transforming Australia and New Zealand industries http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2025/01/06/collaborating-for-impact-how-ai-is-transforming-australia-and-new-zealand-industries/ Mon, 06 Jan 2025 16:00:00 +0000 The AI Tour in Sydney showcased how visionary Australian organizations are already revolutionizing industries.

The post Collaborating for impact: How AI is transforming Australia and New Zealand industries appeared first on Microsoft AI Blogs.

]]>
This blog is part of the AI worldwide tour series, which highlights customers from around the globe who are embracing AI to achieve more. Read about how customers are using responsible AI to drive social impact and business transformation with Global AI innovation.

Sydney, Australia, recently played hosts to the Microsoft AI Tour, bringing together innovators, industry leaders, and government representatives to reinforce the extraordinary opportunity AI represents for Australia and discuss how we can shape the future of the country’s AI economy.

Microsoft also launched the first New Zealand hyperscale cloud in Aotearoa with major sustainability and skilling investments. Microsoft signed a long-term renewable energy contract and will run the latest water-free technology to cool the datacenter. The cloud region offers local data residency, enhanced security, and reduced latency, empowering New Zealand to leverage technology more efficiently at an unprecedented scale. The opening of Microsoft’s hyperscale cloud region marks the most significant milestone in the company’s nearly 40-year history in New Zealand and brings unprecedented opportunities for local organizations. 

As captured in a recent IDC study, the business potential of AI continues to accelerate across the globe, with generative AI adoption surging from 55% in 2023 to 75% in 2024. Companies are seeing a remarkable $3.7 return on investment for every $1 spent on generative AI, with deployments delivering value faster than ever—often within 13 months.1 Recognizing AI’s transformative power, organizations are rapidly advancing their strategies, shifting from pre-built solutions to sophisticated, custom-built AI workloads within two years—highlighting AI’s pivotal role in shaping the competitive edge of the future.

Despite this rapid growth, a lack of technical and practical AI skills remains the top barrier for Australia, highlighting the need for targeted skilling to unlock AI’s full potential. Australia’s government and business leaders are both committed to closing that gap, partnering with Microsoft to provide AI and digital skills training to 1 million Australians and New Zealanders by 2026.

AI Office Hours

Catch up on core AI concepts

background pattern

AI for everyone in Australia

As Australia emerges as a leader in the global AI economy, the nation’s strengths in applications, AI datacenters, and data position it to drive transformative growth across industries. The AI Tour highlighted that AI transformation is not just about technology—it’s a collaborative effort.

From redefining insurance through the Suncorp’s AI-powered customer support tools to advancing retail innovation with Coles’ AI-as-a-service platform and improving public safety through AI solutions with the Australian Federal Police, Australian organizations are leveraging AI to reshape how they operate and serve their communities.

Businesses, governments, educators, and not-for-profits must work together to ensure AI serves all Australians safely and responsibly. With the right focus on infrastructure, skills, security, and responsible AI, the possibilities are limitless.

Brisbane Catholic Education rolls out Microsoft 365 Copilot to 12,500 educators

Brisbane Catholic Education (BCE) has announced the world’s largest generative AI rollout in kindergarten through twelfth grade education, with a plan to provide Microsoft 365 Copilot to 12,500 educators and support staff. In an initial trial, educators reported saving an average of 9.3 hours per week by streamlining administrative tasks, information searches, and lesson planning.

BCE leveraged Microsoft Copilot Studio to create a generative AI tool that helps educators integrate Catholic traditions and values into the classroom. Drawing from BCE’s Catholic identity site, theological database, and religious education curricula, the tool ensures all staff, regardless of their Catholic background, can easily access guidance for applying a Catholic lens to lesson planning and life skills discussions with students.

Coles Group leverages AI to reimagine the grocery experience

Coles Group is revolutionizing the grocery business by leveraging advanced AI and cloud technologies to redefine operations and elevate customer experiences. Confronted with rising competition and shifting customer expectations, Coles turned to AI.

Among its standout AI initiatives is Tell Coles, a generative AI model that deciphers customer feedback to offer actionable insights for store managers, ensuring swift and meaningful improvements. Meanwhile, digital chef delivers hyper-personalized recipes and cooking tips in real time, powered by Microsoft Azure machine learning models.

Internally, Microsoft AI copilots provide team members with intuitive, real-time tools that streamline workflows and enhance productivity, enabling precision in inventory and operations across 850 stores, and resulting in heightened customer engagement, improved sustainability practices, and a blueprint for digital excellence in retail.

Petbarn’s AI assistant brings personalized pet care to customers

Petbarn has introduced PetAI, a generative AI solution designed to help pet owners keep their pets happy and healthy. Built with Azure OpenAI Service, Azure AI Search, and Azure App Service, PetAI acts as an intelligent assistant, offering personalized pet care advice and tailored product recommendations.

Launched on Petbarn’s website in October 2024, PetAI quickly gained traction, with thousands of customers embracing its functionality. Now integrated into the new Petbarn App, it provides a comprehensive, AI-powered approach for managing pet wellbeing, setting a new standard in personalized pet care.

Suncorp accelerates AI revolution in the insurance industry

Suncorp Group is transforming the insurance industry with AI integration across its operations. Leveraging the latest Microsoft AI capabilities at scale, Suncorp has more than 120 AI use cases in development to enhance both customer experience and employee satisfaction.

Among these innovations is Smart Knowledge, which analyzes thousands of articles to deliver relevant information to Suncorp’s contact center team, enabling faster and more accurate customer support. Additionally, Suncorp has implemented an Azure OpenAI Service-based solution that provides claims managers with a unified view of each insurance claim, reducing time spent tracking information across systems and shortening claim lifecycles by 9%.

To further improve employee experiences, Suncorp has rolled out Microsoft 365 Copilot alongside its AI+U Academy, a training initiative designed to empower staff to effectively use AI tools in their daily work. These efforts not only enhance employee satisfaction but also ensure exceptional outcomes for customers, solidifying Suncorp’s position as an industry leader in AI-powered innovation.

The Australian Federal Police (AFP) leverages AI to better protect Australians

Australia’s national policing agency, the Australian Federal Police (AFP), is expanding its partnership with Microsoft to develop custom AI solutions built on Azure AI services. With 7,000 staff members tasked with investigating federal crimes across Australia and the Australian Capital Territory, the AFP is leveraging AI to detect deepfake images and other problematic content. This work has shown particular promise in child protection, where AI has already enabled law enforcement to track child predators and rescue victims more effectively.

Along with 50 other Australian Public Service agencies, the AFP is trialing Microsoft 365 Copilot, which has demonstrated early gains in officers’ efficiency by automating document and report creation. Further, the agency is exploring how AI can safeguard officers’ mental health, using generative AI to create text summaries of sensitive material and modify graphic images to reduce their psychological impact.

To address ethical and community concerns, the AFP has established a Responsible and Ethical AI Framework, drawing on Microsoft’s principles to ensure AI is implemented with diligence, accountability, and strong human oversight. These initiatives position the AFP as a leader in responsible AI use within law enforcement.

Find the resources to support your AI journey

Australia is only beginning to unlock the immense potential of AI, yet the AI Tour in Sydney showcased how visionary Australian organizations are already revolutionizing industries, from education and policing to banking and insurance. Let’s embrace this opportunity, together.


1IDC InfoBrief: sponsored by Microsoft, 2024 Business Opportunity of AI, IDC# US52699124, November 2024.

The post Collaborating for impact: How AI is transforming Australia and New Zealand industries appeared first on Microsoft AI Blogs.

]]>
Seizing the AI opportunity: How to transform Canada’s economy by 2030 http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2024/12/16/seizing-the-ai-opportunity-how-to-transform-canadas-economy-by-2030/ Mon, 16 Dec 2024 16:00:00 +0000 Canada is not only addressing present challenges but also paving the way for a future where AI drives meaningful innovation and transformation.

The post Seizing the AI opportunity: How to transform Canada’s economy by 2030 appeared first on Microsoft AI Blogs.

]]>
This blog is part of the AI worldwide tour series, which highlights customers from around the globe who are embracing AI to achieve more. Read about how customers are using responsible AI to drive social impact and business transformation with Global AI innovation.

Generative AI adoption is officially on the rise in Canada. According to KPMG’s Generative AI Adoption Index, nearly half of Canadians use generative AI in their jobs—65% of those using it daily—however, only 18% of Canadian employers report having formally deployed AI tools to their workforce and putting guiding policies in place.1 Further, Canada trails its peers in generative AI adoption and public trust, highlighting significant untapped potential.

In fact, with $187 billion in potential economic impact by 2030, plus $7 billion from innovative generative AI products and services, Canada stands at a pivotal moment in its digital transformation journey. Many leaders recognize the urgency of accelerating AI adoption to unlock this opportunity and remain globally competitive. As a result, the nation is advancing responsible AI practices, workforce development, and infrastructure growth through strategic partnerships between government, public organizations, and industry leaders like Microsoft. These collaborations aim to position Canada as a global leader in harnessing AI for both economic growth and societal progress.

This vision was brought to life at the recent Microsoft AI Tour stop in Toronto, Canada, where forward-thinking Canadian organizations showcased how they are embracing AI to tackle industry challenges and seize opportunities. Through these efforts, Canada is not only addressing present challenges but also paving the way for a future where AI drives meaningful innovation and transformation.

Ottawa Hospital turns to AI to reduce clinician burnout

The Ottawa Hospital (TOH) is the first Canadian hospital to trial DAX Copilot—a Microsoft solution that uses AI to create draft clinical notes for physicians during patient appointments. By saving physicians time and effort in preparing patient charts, the hospital hopes to increase access to care for patients and reduce physician burnout.

DAX Copilot uses advanced AI to securely record doctor-patient conversations, transcribe into medical notes for review, and upload to the hospital’s electronic health records system. Patients provide consent and can also access their appointment notes through their online patient portal. DAX Copilot reduces administrative burdens and addresses the pressing issue of clinician burnout, letting Ottawa Hospital physicians spend more time with patients. Backed by Microsoft’s responsible AI principles, it streamlines documentation for better care.

Metrolinx PRESTO payments leverages AI to attune to resident needs and enhance customer experience

Metrolinx, Ontario’s public transportation agency, is embracing the transformative power of AI to enhance operational efficiency, improve decision-making, and ensure a secure and seamless experience for commuters.

Metrolinx PRESTO is using generative AI and machine learning to analyze and categorize free-text survey responses, making it easier to understand and respond to resident feedback. This approach reduces manual effort, minimizes bias, and provides faster, more actionable insights. By streamlining data processing and simplifying deployment, the system enhances decision-making and helps Metrolinx stay flexible and customer focused.

PRESTO is further transforming customer interactions through the implementation of PRESTO’s AI-powered B2C Webchat Copilot. Designed to provide a frictionless, inclusive, and accessible experience, empowering users to easily resolve issues, payment inquiries, and additional self serve capability. Integrated into the PRESTO website and app, the chatbot addresses customer needs directly while offering seamless escalation to live agents when necessary. These agents, equipped with the Customer Service Copilot, ensure swift and accurate resolutions. This initiative reflects Metrolinx’s commitment to social equity and customer-centricity, aligning with its broader mission to enhance operational efficiency and accessibility for commuters. 

Bringing employees, partners, and key stakeholders along on their AI journey, PRESTO continues to explore and adopt AI innovation as they look to what’s next to better serve commuters. Metrolinx is setting a benchmark for how government entities can embrace AI responsibly and effectively, while positioning the agency as a leader in innovative and commuter-centric public services.

Canadian Tire Corporation boosts employee productivity with Azure AI

Canadian Tire has turned to AI to address the significant challenges related to changing demands for digital solutions, processing vast amounts of data, and achieving operational efficiencies.

During the pandemic, Canadian Tire leveraged AI and Microsoft Teams to develop a robust curbside delivery system, streamlining order management, ensuring timely deliveries, and providing a safe and seamless customer experience. Automation also played a crucial role, reducing manual errors and improving operational efficiency by handling repetitive tasks. The curbside delivery system tripled online sales during the pandemic, demonstrating Canadian Tire’s ability to adapt to shifting consumer needs. Additionally, these innovations significantly improved customer satisfaction by offering a seamless digital experience.

Looking ahead, Canadian Tire is exploring the use of AI to further personalize customer experiences, optimize supply chain logistics, and predict market trends, and plans to expand its integration of cloud technologies and machine learning models to refine inventory management and drive sustainable retail practices, ensuring it remains a leader in digital transformation within the retail sector.

AI for everyone in Canada

Now is the time for Canada to unlock its AI potential. Generative AI has the potential to add $187 billion to the economy by 2030, presenting a unique opportunity for governments, industry, and civil society to collaborate and harness AI as a force for good—driving growth, creating jobs, and shaping a better future for all Canadians.

microsoft collaborates with organizations to skill canadians in generative ai


Read the blog

At Microsoft, we are committed to making this vision a reality. By investing in skills, infrastructure, and partnerships, we are working to ensure the AI economy benefits everyone. From helping Canadian organizations adopt AI responsibly to advancing solutions for real-world challenges, we are dedicated to driving meaningful transformation across industries.

Through purposeful collaboration, we can maximize AI’s potential to strengthen Canada’s economy and enhance society. The time to act is now. Let’s shape the future, together.

Find the resources to support your AI journey


1 KPMG, Generative AI adoption Index.

The post Seizing the AI opportunity: How to transform Canada’s economy by 2030 appeared first on Microsoft AI Blogs.

]]>
More value, less risk: How to implement generative AI across the organization securely and responsibly http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2024/11/04/more-value-less-risk-how-to-implement-generative-ai-across-the-organization-securely-and-responsibly/ Mon, 04 Nov 2024 16:00:00 +0000 The technology landscape is undergoing a massive transformation, and AI is at the center of this change.

The post More value, less risk: How to implement generative AI across the organization securely and responsibly appeared first on Microsoft AI Blogs.

]]>
The technology landscape is undergoing a massive transformation, and AI is at the center of this change—posing both new opportunities as well as new threats.  While AI can be used by adversaries to execute malicious activities, it also has the potential to be a game changer for organizations to help defeat cyberattacks at machine speed. Already today generative AI stands out as a transformative technology that can help boost innovation and efficiency. To maximize the advantages of generative AI, we need to strike a balance between addressing the potential risks and embracing innovation. In our recent strategy paper, “Minimize Risk and Reap the Benefits of AI,” we provide a comprehensive guide to navigating the challenges and opportunities of using generative AI.

background pattern

Minimize Risk and Reap the Benefits of AI

Addressing security concerns and implementing safeguards

According to a recent survey conducted by ISMG, the top concerns for both business executives and security leaders on using generative AI in their organization range, from data security and governance, transparency and accountability to regulatory compliance.1 In this paper, the first in a series on AI compliance, governance, and safety from the Microsoft Security team, we provide business and technical leaders with an overview of potential security risks when deploying generative AI, along with insights into recommended safeguards and approaches to adopt the technology responsibly and effectively.

Learn how to deploy generative AI securely and responsibly

In the paper, we explore five critical areas to help ensure the responsible and effective deployment of generative AI: data security, managing hallucinations and overreliance, addressing biases, legal and regulatory compliance, and defending against threat actors. Each section provides essential insights and practical strategies for navigating these challenges. 

An infographic displaying the top 5 security and business leader concerns: data security, hallucinations, threat actors, biases, and legal and regulatory

Data security

build a foundation for AI success


Explore governance

Data security is a top concern for business and cybersecurity leaders. Specific worries include data leakage, over-permissioned data, and improper internal sharing. Traditional methods like applying data permissions and lifecycle management can enhance security. 

Managing hallucinations and overreliance

Generative AI hallucinations can lead to inaccurate data and flawed decisions. We explore techniques to help ensure AI output accuracy and minimize overreliance risks, including grounding data on trusted sources and using AI red teaming. 

Defending against threat actors

Threat actors use AI for cyberattacks, making safeguards essential. We cover protecting against malicious model instructions, AI system jailbreaks, and AI-driven attacks, emphasizing authentication measures and insider risk programs. 

Grow Your Business with AI You Can Trust

Addressing biases

Reducing bias is crucial to help ensure fair AI use. We discuss methods to identify and mitigate biases from training data and generative systems, emphasizing the role of ethics committees and diversity practices.

Microsoft’s journey to redefine legal support with AI


All in on AI

Navigating AI regulations is challenging due to unclear guidelines and global disparities. We offer best practices for aligning AI initiatives with legal and ethical standards, including establishing ethics committees and leveraging frameworks like the NIST AI Risk Management Framework.

Explore concrete actions for the future

Explore security innovations


Microsoft at RSAC 2025

As your organization adopts generative AI, it’s critical to implement responsible AI principles—including fairness, reliability, safety, privacy, inclusiveness, transparency, and accountability. In this paper, we provide an effective approach that uses the “map, measure, and manage” framework as a guide; as well as explore the importance of experimentation, efficiency, and continuous improvement in your AI deployment.

I’m excited to launch this series on AI compliance, governance, and safety with a strategy paper on minimizing risk and enabling your organization to reap the benefits of generative AI. We hope this series serves as a guide to unlock the full potential of generative AI while ensuring security, compliance, and ethical use—and trust the guidance will empower your organization with the knowledge and tools needed to thrive in this new era for business.

Additional resources

Get more insights from Bret Arsenault on emerging security challenges from his Microsoft Security blogs covering topics like next generation built-in security, insider risk management, managing hybrid work, and more.


1, 2 ISMG’s First annual generative AI study – Business rewards vs. security risks: Research report, ISMG.

The post More value, less risk: How to implement generative AI across the organization securely and responsibly appeared first on Microsoft AI Blogs.

]]>
AI safety first: Protecting your business and empowering your people http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2024/10/31/ai-safety-first-protecting-your-business-and-empowering-your-people/ Thu, 31 Oct 2024 15:00:00 +0000 Microsoft has created some resources like the Be Cybersmart Kit to help organizations learn how to protect themselves.

The post AI safety first: Protecting your business and empowering your people appeared first on Microsoft AI Blogs.

]]>


Every technology can be used for good or bad. This was as true for fire and for writing as it is for search engines and for social networks, and it is very much true for AI. You can probably think of many ways that these latter two have helped and harmed in your own life—and you can probably think of the ways they’ve harmed more easily, because those stick out in our minds, while the countless ways they helped (finding your doctor, navigating to their office, the friends you made, the jobs you got) fade into the background of life. You’re not wrong to think this: when a technology is new it’s unfamiliar, and every aspect of it attracts our attention—how often do you get astounded by the existence of writing nowadays?—and when it doesn’t work, or gets misused, it attracts our attention a lot.

The job of the people who build technologies is to make them as good as possible at helping, and as bad as possible at harming. That’s what my job is: as CVP and Deputy CISO of AI Safety and Security at Microsoft, I have the rare privilege of leading a team whose job is to look at every aspect of every AI system we build, and figure out ways to make them safer and more effective. We use the word “safety” very intentionally, because our work isn’t just about security, or privacy, or abuse; our scope is simply “if it involves AI, and someone or something could get hurt.”

But the thing about tools is that no matter how safe you make them, they can go wrong and they can be misused, and if AI is going to be a major part of our lives—which it almost certainly is—then we all need to learn how to understand it, how to think about it, and how to keep ourselves safe both with and from it. So as part of Cybersecurity Awareness Month, we’ve created some resources like the Be Cybersmart Kit to help individuals and organizations learn about some of the most important risks and how to protect themselves.

Cybersecurity awareness

Explore cybersecurity awareness resources and training

I’d like to focus on the three risks that are most likely to affect you directly as individuals and organizations in the near future: overreliance, deepfakes, and manipulation. The most important lesson is that AI safety is about a lot more than how it’s built—it’s about the ways we use it.

Overreliance on AI

Microsoft at RSAC 2025


Explore Security innovations

Because my job has “security” in the title, when people ask me about the number one risk from AI they often expect me to talk about sophisticated cyberattacks. But the reality is that the number one way in which people get hurt by AI is by not knowing when (not) to trust it. If you were around in the late 1990s or early 2000s, you might remember a similar problem with search engines: people were worried that if people saw something on the Internet, all nicely written and formatted, they would assume whatever they read was true—and unfortunately, this worry was well-founded. This might seem ridiculous to us with twenty years of additional experience with the Internet; didn’t people know that the Internet was written by people? Had they ever met people? But at the time, very few people ever encountered professionally-formatted text with clean layouts that wasn’t the result of a lengthy editorial process; our instincts for what “looked reputable” were wrong. Today’s AI has a similar concern because it communicates with you, and we aren’t used to things that speak to us in natural language not understanding basic things about our lives.

We call this problem “overreliance,” and it comes in four basic shapes:

  • Naive overreliance happens when users simply don’t realize that just because responses from AI sound intelligent and well-reasoned, that doesn’t mean the responses actually are smart. They treat the AI like an expert instead of like a helpful, but sometimes naive, assistant.
  • Rushed overreliance happens when people know they need to check, but they just don’t have time to—maybe they’re in a fast-paced environment, or they have too many things to check one by one, or they’ve just gotten used to clicking “accept.”
  • Forced overreliance is what happens when users can’t check, even if they want to; think of an AI helping a non-programmer write a complex website (are you going to check the code for bugs?) or vision augmentation for the blind.
  • Motivated overreliance is maybe the sneakiest: it happens when users have an answer they want to get, and keep asking around (or rephrasing the question, or looking at different information) until they get it.

In each case, the problem with overreliance is that it undermines the human role in oversight, validation, and judgment, which is crucial in preventing AI mistakes from leading to negative outcomes.

How to stay safe

The most important thing you can do to protect yourself is to understand that AI systems aren’t the infallible computers of science fiction. The best way to think of them is as earnest, smart, junior colleagues—excited to help and sometimes really smart but sometimes also really dumb. In fact, this rule applies to a lot more than just overreliance: we’ve found that asking “how would I make this safe if it were a person instead of an AI?” is one of the most reliable ways to secure an AI system against a huge range of risks.

  1. Treat AI as a tool, not a decision-maker: Always verify the AI’s output, especially in critical areas. You wouldn’t hand a key task to a new hire and assume what they did is perfect; treat AI the same way. Whether it’s generating code or producing a report, review it carefully before relying on it.
  2. Maintain human oversight: Think of this as building a business process. If you’re going to be using an AI to help make decisions, who is going to cross-check that? Will someone be overseeing the results for compliance, maybe, or doing a final editorial pass? This is especially true in high-stakes or regulated environments where errors could have serious consequences.
  3. Use AI for brainstorming: AI is at its best when you ask it to lean into its creativity. It’s especially good at helping come up with ideas and interactively brainstorming. Don’t ask AI to do the job for you; ask AI to come up with an idea for your next step, think about it and maybe tweak it a bit, then ask it about its thoughts for what to do next. This way its creativity is boosting yours, while your eye is still on whether the result is what you want.

Train your team to know that AI can make mistakes. When people understand AI’s limitations, they’re less likely to trust it blindly.

Impersonation using AI

Fighting deepfakes with more transparency


Read more

Deepfakes are highly realistic images, recordings, and videos created by AI. They’re called “fakes” when they’re used for deceptive purposes—and both this threat and the next one are about deception. Impersonation is when someone uses a deepfake to convince you that you’re talking to someone that you aren’t. This threat can have serious implications for businesses, as bad actors can use deepfake technology to deceive others into making decisions based on fraudulent information.

Imagine someone creates a deepfake of your chief finance officer’s voice and uses it to convince an employee to authorize a fraudulent transfer. This isn’t hypothetical—it already happened. A company in Hong Kong was taken for $25.6 million with the use of this exact technique.1

The real danger lies in how convincingly these AI-generated voices and videos can mimic trusted individuals, making it hard to know who you’re talking to. Traditional methods of identifying people—like hearing their voice on the phone or seeing them on a video call—are no longer reliable.

How to stay safe

As deepfakes become more compelling, the best defense is to communicate with people in ways where recognizing their face or voice isn’t the only thing you’re relying on. That means using authenticated communication channels like Microsoft Teams or email rather than phone calls or SMS, which are trivial to fake. Within those channels, you need to check that you’re talking to the person you think you’re talking to, and that software (if built right) can help you do that.

In the Hong Kong example above, the bad actor sent an email from a fake but realistic-looking email address inviting the victim to a Zoom meeting on an attacker-controlled but realistically-named server, where they had a conversation with “coworkers” who were actually all deepfakes. Email services such as Outlook can prevent situations like this by vividly highlighting that this is a message from an unfamiliar email address and one that isn’t part of your company; enterprise video conferencing (VC) systems like Teams can identify that you’re connecting to a system outside your own company as a guest. Use tools that provide indicators like these and pay attention to them.

If you find that you need to talk over an unauthenticated channel—say, you get a phone call from a family member in a bad situation and desperately needing you to send them money, or you get a WhatsApp message from an unfamiliar number—consider pre-arranging some secret code words with people you know so you can identify that they’re really who they say they are.

All of these are examples of a familiar technique that we use in security called multi-factor authentication (MFA), which is about using multiple means to verify someone is who they say they are. If you communicate over an authenticated channel, an attacker has to both compromise an account on your service (which itself should be protected by multiple factors) and create a convincing deepfake of that particular person. Forcing attackers to simultaneously do multiple different attacks against the same target at once makes the job exponentially harder for them. Most important services you use (email, social networks, and so on) allow you to set up MFA, and you should always do this when you can—preferably using “strong” MFA methods like physical keys or mobile apps, rather than weak methods like SMS, which are easily faked. According to our latest Microsoft Digital Defense Report, implementing modern day MFA reduces the likelihood of account compromise by 99.2%, significantly strengthening security and making unauthorized access more difficult for attackers to gain access. Although MFA techniques reduce the risk of identity compromise, many organization have been slow to adopt them. So, in January 2020, Microsoft introduced “security defaults” that turn on MFA while turning off basic and legacy authentication for new tenants and those with simple environments. The impact is clear: tenants that use security defaults experience 80% fewer compromises than tenants that don’t.

Scams, phishing, and social manipulation

What is phishing?


Learn more

Beyond impersonating someone you know, AI can be used to power a whole range of attacks against people. The most expensive part of running a scam is taking the victim from the moment they first pick up the bait—answering an email message, perhaps—to the moment the scammers get what they want, be it your password or your money. Phishing campaigns often require work to create cloned websites to steal your credentials. Spear-phishing requires crafting a targeted set of lures for each potential victim. All of these are things that bad actors can do much more quickly and easily with AI tools to help them; they are, after all, the same tools that good actors use to automate customer service, website building, or document creation.

On top of scams, an increasingly important use of AI is in social manipulation, especially by actors with political goals—whether they be real advocacy organizations or foreign intelligence services. Since the mid-2010s, a key goal of many governments has been to sow confusion in the information world in order to sway political outcomes. This can include:

  • Convincing you that something is true when it isn’t—maybe that some kind of crime is rampant and you need to be protected from it, or that your political enemies have been doing something awful.
  • Convincing you that something isn’t true when it is—maybe that the bad things they were caught doing are actually deepfakes and frauds.
  • Simply convincing you that you can’t know what’s true, and you can’t do anything about it anyway, so you should just give up and stay home and not try to affect things.

There are a lot of tricks to doing this, but the most important ones are to make it feel like “everybody feels” something (by making sure you see just enough comments saying something that you figure it must be right, and you start repeating them, making other people believe it even more) and by telling you what you want to hear—creating false stories that line up with what you’re already expecting to believe. (Remember motivated overreliance? This is the same thing!)

AI is supercharging this space as well; it used to be that if you wanted to make sure that every hot conversation about a subject had people voicing your opinion, you needed either very non-human-sounding scripts, or you needed to hire a room full of operators. Today, all you need is a computer.

You can learn more about these attacks in on our threat intelligence website called Microsoft Security Insider.

How to stay safe

Take your current habits for being aware of potential scams or phishing attempts, and turn them up a notch. Just because something showed up at the top of search results doesn’t mean it’s legitimate. Look at things like URLs and source email addresses carefully, and see if you’re looking at something genuine or not.

To detect sophisticated phishing attempts, always verify both the source and the information with trusted channels. Cybercriminals often create a false sense of urgency, use amplification tactics, and mimic trustworthy sources to make their emails or content appear legitimate. Stay especially cautious when approached by unfamiliar individuals online, as most fraud or influence operations begin with a simple social media reply or a seemingly innocent “wrong number” message. (More sophisticated attacks will send friend requests to people, and once you get one person to say yes, your further requests to their friends will look more legitimate, since they now have mutual “friends” with the attacker.)

Social manipulation can affect you both directly (you see messages created by a threat actor) or indirectly (your friends saw those messages and unwittingly repeated them). This means that just because you hear something from someone you trust, you can’t be sure they didn’t get fooled too. If you’re forming your opinion about something, or if you need to make an important decision about whether you believe something or not, do some research, and figure out where a story came from. (And don’t forget that “they won’t tell you about this!” is a common thing to add to frauds, just to make you believe that the lack of news coverage makes it more true.)

But on the other hand, don’t refuse to believe anything you hear, because making you not believe true things is another way you can be cheated. Too much skepticism can get you in just as much trouble as not enough.

And ultimately, remember—social media and similar fora are designed to get you more engaged, activated, and excited, and when you’re in that state, you’re more likely to amplify any feelings you encounter. Often the best thing you can do is simply disconnect for a while and take a breather.

The power and limitations of AI

While AI is a powerful tool, its safety and effectiveness rely on more than just the technology itself. AI functions as one part of a larger, interconnected system that includes human oversight, business processes, and societal context. Navigating the risks—whether it’s overreliance, impersonation, cyberattacks, and social manipulation—requires not only understanding AI’s role but also the actions people must take to stay safe. As AI continues to evolve, staying safe means remaining active participants—adapting, learning, and taking intentional steps to protect both the technology and ourselves. We encourage you to use the resources on the cybersecurity awareness page and help educate your organization so as to create a security-first culture and secure our world—together.

Learn more about AI safety and security


1Finance worker pays out $25 million after video call with deepfake ‘chief financial officer’, CNN, 2024.

The post AI safety first: Protecting your business and empowering your people appeared first on Microsoft AI Blogs.

]]>
3 ways social impact organizations can leverage AI to transform outcomes at scale http://approjects.co.za/?big=en-us/microsoft-cloud/blog/2024/10/07/3-ways-social-impact-organizations-can-leverage-ai-to-transform-outcomes-at-scale/ Mon, 07 Oct 2024 15:00:00 +0000 We are supporting nonprofits through technology, and particularly by leveraging Azure AI, to deepen their impact in three significant ways.

The post 3 ways social impact organizations can leverage AI to transform outcomes at scale appeared first on Microsoft AI Blogs.

]]>
Nonprofits are building creative solutions in Microsoft AI to address some of the world’s most entrenched challenges. This evolving technology is helping unlock social impact organizations’ capacity to do good at scale, securely. 

Armed conflict, economic uncertainty, climate change, and countless other pressures contribute to global headwinds gusting against progress. Thousands of nonprofits and other social impact organizations are bringing the skills, passion, and boots-on-the-ground effort to meet these challenges directly. Their dedication improves the lives of people across the world. 

Mission-driven organizations face difficulties of their own, though. Changes in demographics, global economies, and geopolitics lead to rising demand for their services. Nonprofits have always operated with limited resources, but today’s economic climate makes fundraising even tougher. Increasingly sophisticated threats to democracy and cybersecurity make their work more needed and more difficult at the same time. 

Microsoft for Nonprofits

Empower your nonprofit with AI

Microsoft philanthropies


Read more stories

This is where AI can help—to enable nonprofits and other social impact organizations to do more good with less. AI, through Microsoft’s purpose-driven technology, can unlock the capacity of these vital organizations worldwide. As the sector increasingly adopts AI, we see more examples of its potential to accelerate societal impact.  

We are supporting nonprofits through technology, and particularly by leveraging Azure AI, to deepen their impact in three significant ways. AI is helping them protect and expand critical services, meet the needs of shifting demographics in the Global South, and partner across sectors to drive humanitarian progress. 

1. Securing and expanding critical services 

With roughly 8 billion people sharing the earth’s limited resources, and with too many people living on not enough, it’s important to steward critical supplies and services. From healthcare to clean water, these resources are foundational to well-being and the pursuit of fundamental rights. 

AI is enabling social impact organizations to reliably and securely scale these essential services. For example, while the Kenyan Red Cross offers mental health support in person and through its 24-hour phone line, this vital care remains out of reach for many people. The Kenyan Red Cross worked with psychologists and counselors, AI experts, people with lived experience of mental health conditions, and others to create an Azure AI-powered chatbot to expand its free mental health outreach.

The chatbot, which is in its beta release and is embedded in the organization’s website, prompts conversations about mental wellbeing, recommends helpful practices, and offers to connect users to human counselors and in-person resources such as humanitarian organizations or clinics. Kelvin Njenga, Digital Transformation Officer at Kenya Red Cross, adds, “In Kenya, there is a lot of stigma around getting mental health support. Leveraging AI in the chatbot provides that support, confidentially.”

This use of AI does not attempt to replace human connection. Rather, it complements person-to-person support and broadens the Kenyan Red Cross’s capacity to reach even more people with the mental health care they deserve. About one billion people worldwide live with a medical condition, and technology-enabled solutions like this chatbot can help overcome barriers to crucial services.1 

2. Delivering benefits for the most vulnerable and hard to reach people 

AI is enabling organizations to reach more people in some of the most remote areas of the world. Through better use of data and insights, AI solutions can lead to more informed decision-making and more efficient development programs that can change lives.

The International Fund for Agricultural Development (IFAD), a specialized agency of the United Nations and an International Financial Institution that invests in the world’s poorest people, has built an internal analytics platform with Microsoft Power Platform, Microsoft Azure—including Azure OpenAI Service and Azure Machine Learning—and other data and AI solutions to turn its information into insights and then action. 

IFAD developed the platform in compliance with the United Nations Principles on the Ethical Use of AI. The solution combines data, dashboards, and visualizations from diverse sources across IFAD, enabling staff around the world to connect and contribute to this wealth of information. IFAD anticipates the AI-enabled platform will help them develop and implement ever-more impactful interventions which benefit small-scale food producers and other rural people.

AI and machine learning can combine and analyze vast amounts of information at a pace and scale impossible for humans to achieve on their own. Empowered by the most complete information possible, leaders of social impact organizations can move the needle farther on the world’s most pressing challenges.

3. Partnering to empower the social impact ecosystem 

The problems our planet faces are too vast and complex for any one organization to solve. We must all work together to innovate solutions that make life better for everyone. By utilizing the expertise and lived experience of a diversity of stakeholders, AI solutions can make more of a difference than any single organization or agency could do alone. 

That is precisely the approach that one coalition is taking to tackle malnutrition in Kenya. A cross-sector collaboration between Amref Health Africa, the Kenyan Ministry of Health, the University of Southern California, and Microsoft is developing a model in Azure to predict and prevent malnutrition. 

The model combines a decade’s worth of detailed healthcare information, collected by the Kenyan Ministry of Health, with other inputs, such as satellite imagery and weather data. Machine learning-powered modeling will help Amref, Kenyan health agencies, and partner humanitarian organizations better understand current nutrition within communities and anticipate future problems. This forecasting will enable them to mobilize health workers and deploy resources to halt malnutrition, explains Dr. Shiphrah Kuria, Amref Regional Manager for Reproductive, Maternal, and Child Health.

“This technology puts us ahead because with better planning and better prevention, we are getting closer to our goals of ending malnutrition.”

Dr. Shiphrah Kuria, Amref Regional Manager for Reproductive, Maternal, and Child Health

We at Microsoft are not only providing the technology that enables nonprofits to build and utilize these Azure AI-based solutions. We are also investing deeply in the infrastructure and resources needed to run AI at an unprecedented scale. That way, we help bring the power of AI to social impact organizations everywhere—and transform the world for the better.   

Explore AI solutions for nonprofits

Learn more about how Microsoft is supporting nonprofits, see how other organizations are using AI to drive impact, and get more information about how you can safely and securely deploy AI to support your business needs.  


1World Health Organization Fact Sheet, 2022.

The post 3 ways social impact organizations can leverage AI to transform outcomes at scale appeared first on Microsoft AI Blogs.

]]>