We've identified six principles that we believe should guide AI development and use.
Fairness
AI systems should treat all people fairly. How might an AI system allocate opportunities, resources, or information in ways that are fair to the humans who use it?
Reliability and safety
AI systems should perform reliably and safely. How might the system function well for people across different use conditions and contexts, including ones it was not originally intended for?
Privacy and security
AI systems should be secure and respect privacy. How might the system be designed to support privacy and security?
Inclusiveness
AI systems should empower everyone and engage all people, regardless of their background. How might the system be designed to be inclusive of people of all abilities?
Transparency
AI systems should be understandable. How can we ensure people correctly understand the capabilities of the system?
Accountability
People should be accountable for AI systems. How can we create oversight so that humans can be accountable and in control?
Explore how Microsoft empowers employees across the organization to be champions of responsible AI.
We set rules for enacting responsible AI and clearly define roles and responsibilities for teams involved.
We foster readiness to adopt responsible AI practices, both within our company and with our customers and partners.
We review sensitive use cases to ensure we are upholding our responsible AI principles.
We work to shape new laws and standards to help ensure that the promise of AI is realized for society at large.
Researchers in Aether, Microsoft Research, and our engineering teams keep our responsible AI program on the leading edge.
Teams conduct rigorous AI research, including on transparency, fairness, human-AI collaboration, privacy, security, safety, and the impact of AI on people and society.
Our researchers actively participate in broader discussions and debates to ensure that our responsible AI program integrates big-picture perspectives and input.
Engineering teams define and operationalize a tooling and system strategy for using AI responsibly.
Engineering leaders identify and implement engineering practices that integrate responsible AI into everyday work.
Engineering teams implement compliance tooling to help monitor and enforce responsible AI rules and requirements.
AI principles are guidelines designed to ensure the responsible development and deployment of artificial intelligence technologies. These principles are crucial because they help mitigate risks, promote ethical practices, and maximize the benefits of AI for society.
The Responsible AI Standard at Microsoft consolidates essential practices to ensure compliance with emerging AI laws and regulations.
Microsoft offers a range of tools and practices to help organizations put responsible AI into practice.
Additionally, the Responsible AI Standard at Microsoft helps define product development requirements for responsible AI.
Transparency notes are created to help customers better understand the inner workings of AI technologies and make more informed decisions about their use. They are part of the Responsible AI Standard and are intended to support responsible AI development by providing insights into how AI systems are governed, mapped, measured, and managed.
Microsoft also offers the Responsible AI Transparency Report, which provides insights into how Microsoft builds applications with generative AI, oversees the deployment of those applications, supports customers as they build their own AI applications, and fosters a responsible AI community.