Fighting deepfakes with more transparency about AI
Supporting a more trustworthy information ecosystem with responsible AI tools and practices is just one way Microsoft is fighting harmful deepfakes.
In this blog, I’d like to share a few examples of how we’re bringing promising efficiency research out of the lab and into commercial operations.
In the rapidly evolving world of AI, bigger is not always better, as the rise of small language models (SLMs) shows.
The Copilot Learning Hub caters to technical audiences and ensures that each learner accesses content that is tailored to their learning goals.
Read some examples of how we’re advancing the power and energy efficiency of AI.
This new approach to measurement, which means defining and assessing AI risks and ensuring that solutions are effective, looks at both the social and technical elements of how generative technology interacts with people.
Our customer service teams are using AI solutions like Microsoft Copilot to focus on the most meaningful parts of their jobs.
Generative AI is opening up all sorts of new avenues for learning, from personalized tutoring to study guides. But as with any technology, it’s helpful to know its strengths and limitations before diving in.
AI tools such as ChatGPT and Copilot have transformed the way people work in all sorts of roles around the globe, and they have also reshaped so-called red teams: groups of cybersecurity experts whose job is to think like hackers to help keep technology safe and secure.
We are sharing more about two focus areas where we are continuing to drive down water intensity.
A skills-first approach to AI leads to skill building for every role, with Microsoft Learn as a trusted partner.
Our industry-specific solutions enable businesses to adopt and integrate AI technologies swiftly and efficiently.