More value, less risk: How to implement generative AI across the organization securely and responsibly
The technology landscape is undergoing a massive transformation, and AI is at the center of this change.
We’re committed to making sure AI systems are developed responsibly and in ways that warrant people’s trust.
Microsoft has created resources such as the Be Cybersmart Kit to help organizations learn how to protect themselves.
We are supporting nonprofits with technology, particularly Azure AI, to deepen their impact in three significant ways.
Supporting a more trustworthy information ecosystem with responsible AI tools and practices is just one way Microsoft is fighting harmful deepfakes.
Bigger is not always better in the rapidly evolving world of AI, as small language models (SLMs) demonstrate.
At Microsoft, we are committed to Trustworthy AI and are building industry-leading technology to support it. Our commitments and capabilities go hand in hand to make sure our customers and developers are protected at every layer.
This new approach to measurement, which defines and assesses AI risks and verifies that mitigations are effective, examines both the social and technical elements of how generative technology interacts with people.
Around the time GPT-4 was making headlines for acing standardized tests, Microsoft researchers and collaborators were putting other AI models through a different type of test—one designed to make the models fabricate information.
Today, we’re excited to share Global Governance: Goals and Lessons for AI, a collection of external perspectives on international institutions from different domains, brought together with our own thoughts on goals and frameworks for global AI governance.
In this inaugural annual report, we provide insight into how we build applications that use generative AI; make decisions and oversee the deployment of those applications; support our customers as they build their own generative applications; and learn, evolve, and grow as a responsible AI community.
We have collected a set of resources that encompass best practices for AI governance, focusing on security, privacy and data governance, and responsible AI.
Our approach to Responsible AI is built on a foundation of privacy, and we remain dedicated to upholding core values of privacy, security, and safety in all our generative AI products and solutions.