Developing safe, secure, and trustworthy AI

In this inaugural Responsible AI Transparency Report, we provide insight into how we build applications that use generative AI, make decisions about and oversee the deployment of those applications, support our customers as they build their own generative AI applications, and learn, evolve, and grow as a responsible AI community.


Deployment safety for generative AI applications

Safely deploying Copilot Studio

Copilot Studio harnesses generative AI to enable customers without programming or AI skills to build copilots. As with all generative AI systems, the Copilot Studio engineering team mapped, measured, and managed risks according to our governance framework to ensure safety prior to deployment.

Read about Copilot Studio

Safely deploying GitHub Copilot

GitHub Copilot is an AI-powered tool designed to increase developer productivity. In developing the features for GitHub Copilot, the team worked with their Responsible AI Champions to map, measure, and manage risks associated with using generative AI in the context of coding.

Read about GitHub Copilot

Sensitive Uses program

Read about how one of our products, Copilot for Security, mapped, measured, and managed risks with guidance from the Sensitive Uses team.

Read more about Sensitive Uses

AI Customer Commitments

In June 2023, we announced our AI Customer Commitments, outlining steps to support our customers on their responsible AI journey.

Explore AI Customer Commitments

Tools to support responsible development

We’ve released 30 responsible AI tools comprising more than 100 features to support customers’ responsible AI development. These tools help map and measure AI risks and manage identified risks through novel mitigations, real-time detection and filtering, and ongoing monitoring.

Learn more about our RAI tools

Transparency to support responsible development and use

We provide documentation to our customers about our AI applications’ capabilities, limitations, intended uses, and more.

Learn more about AI transparency

Governance of responsible AI

At Microsoft, no single team or organization can be solely responsible for adopting and enforcing responsible AI practices.

Learn about our RAI community

External partnerships

We partner with governments, civil society organizations, academics, and others to advance responsible AI.

Supporting AI research

Academic research and development can help realize the potential of AI. We’ve committed support to various programs and regularly publish research to advance the state of the art in responsible AI.

Tuning in to global perspectives

In 2023, we worked with more than 50 internal and external groups to better understand how AI innovation may impact regulators and individuals in developing countries.

Learn about our initiative

Explore Responsible AI at Microsoft

Earn trust

We’re committed to advancing cybersecurity and digital safety, leading the responsible use of AI, and protecting privacy.

Learn how we earn trust

Responsible AI

We are committed to the advancement of AI driven by ethical principles.

Learn about Responsible AI