{"id":2453,"date":"2025-05-20T17:37:45","date_gmt":"2025-05-20T17:37:45","guid":{"rendered":"https:\/\/dmcwpprod.wpenginepowered.com\/?page_id=1156"},"modified":"2025-10-16T10:46:20","modified_gmt":"2025-10-16T10:46:20","slug":"responsible-ai-transparency-report","status":"publish","type":"page","link":"https:\/\/www.microsoft.com\/dmc\/en-us\/corporate-responsibility\/responsible-ai-transparency-report\/","title":{"rendered":"Responsible AI Transparency Report"},"content":{"rendered":"
[vc_row css=”.vc_custom_1749652948855{padding-top: 300px !important;padding-right: 5% !important;padding-left: 5% !important;background-image: url(https:\/\/dmcwpprod.wpenginepowered.com\/wp-content\/uploads\/2025\/06\/BANNER_IMAGE_sized.png?id=1651) !important;background-position: center !important;background-repeat: no-repeat !important;background-size: cover !important;}” el_id=”microsoft”][vc_column width=”1\/5″ offset=”vc_hidden-md vc_hidden-sm vc_hidden-xs”][vc_column_inner width=”1\/5″][\/vc_column_inner][\/vc_column][vc_column width=”3\/5″][vc_column_text css=””]<\/p>\n
[\/vc_column_text][vc_column_text css=””]Share<\/span> Our second annual Responsible AI Transparency Report covers the progress we\u2019ve made since the publication of our inaugural report in 2024. It highlights our continued commitment to responsible innovation, covering how we develop and deploy AI models and systems responsibly; how we support our customers; and how we learn, evolve, and grow.[\/vc_column_text][vc_row_inner][vc_column_inner][vc_btn title=”View the 2025 report” style=”outline-custom” outline_custom_color=”#2D231E” outline_custom_hover_background=”#2D231E” outline_custom_hover_text=”#FFFFFF” shape=”square” i_align=”right” i_icon_fontawesome=”fa fa-solid fa-square-arrow-up-right” css_animation=”none” css=”” add_icon=”true” link=”url:https%3A%2F%2Faka.ms%2FResponsible-AI-Transparency-Report-2025%23page%3D1|target:_blank” el_class=”btn-banner-report”][\/vc_column_inner][\/vc_row_inner][\/vc_column][vc_column width=”1\/5″ offset=”vc_hidden-md vc_hidden-sm vc_hidden-xs”][\/vc_column][\/vc_row][vc_row css=”.vc_custom_1747832219483{padding-right: 5% !important;padding-left: 5% !important;background-color: #FFFFFF00 !important;background-position: center !important;background-repeat: no-repeat !important;background-size: cover !important;}” el_class=”video”][vc_column][vc_row_inner equal_height=”yes”][vc_column_inner width=”1\/3″][vc_column_text css=””]<\/p>\n In 2024, we made key investments in our responsible AI tools, policies, and practices to move at the speed of AI innovation.[\/vc_column_text][\/vc_column_inner][vc_column_inner width=”1\/3″][vc_column_text css_animation=”none” css=”.vc_custom_1750158632139{padding-top: 35px !important;}” el_class=”border”]<\/p>\n We improved our responsible AI tooling to expand coverage for risk evaluation and mitigations across modalities as well as for agentic systems.[\/vc_column_text][\/vc_column_inner][vc_column_inner width=”1\/3″][vc_column_text css_animation=”none” css=”.vc_custom_1750158642398{padding-top: 
35px !important;}” el_class=”border”]<\/p>\n We took a proactive, layered approach to compliance with new regulatory requirements.<\/p>\n [\/vc_column_text][\/vc_column_inner][\/vc_row_inner][vc_row_inner equal_height=”yes”][vc_column_inner width=”1\/3″][\/vc_column_inner][vc_column_inner width=”1\/3″][vc_column_text css_animation=”none” css=”.vc_custom_1750158755338{padding-top: 35px !important;}” el_class=”border”]<\/p>\n We launched an internal workflow tool to centralize responsible AI requirements and simplify documentation for pre-deployment reviews.[\/vc_column_text][\/vc_column_inner][vc_column_inner width=”1\/3″][vc_column_text css_animation=”none” css=”.vc_custom_1750158764402{padding-top: 35px !important;}” el_class=”border”]<\/p>\n We continued to provide hands-on counseling for high-impact and higher-risk uses of AI, particularly in areas related to healthcare and the sciences.<\/span><\/p>\n [\/vc_column_text][\/vc_column_inner][\/vc_row_inner][vc_row_inner equal_height=”yes”][vc_column_inner width=”1\/3″][\/vc_column_inner][vc_column_inner width=”1\/3″][vc_column_text css_animation=”none” css=”.vc_custom_1750158773039{padding-top: 35px !important;}” el_class=”border”]<\/p>\n We established the AI Frontiers lab to push the frontier of AI capabilities, efficiency, and safety<\/span><\/span>.<\/p>\n [\/vc_column_text][\/vc_column_inner][vc_column_inner width=”1\/3″][vc_column_text css_animation=”none” css=”.vc_custom_1750158785097{padding-top: 35px !important;}” el_class=”border”]<\/p>\n We collaborated with stakeholders around the world to make progress towards building coherent governance frameworks.<\/p>\n [\/vc_column_text][\/vc_column_inner][\/vc_row_inner][\/vc_column][\/vc_row][vc_row css=”.vc_custom_1749636436990{padding-right: 5% !important;padding-left: 1% !important;background-color: #FFFFFF00 !important;background-position: center !important;background-repeat: no-repeat !important;background-size: contain !important;}” 
el_id=”build”][vc_column width=”1\/5″ offset=”vc_hidden-md vc_hidden-sm vc_hidden-xs”][vc_row_inner][vc_column_inner el_id=”newfloatingmenu”][vc_column_text css=”.vc_custom_1760611503546{margin-right: 10px !important;margin-bottom: 0px !important;margin-left: 10px !important;padding-top: 1px !important;padding-right: 10px !important;padding-bottom: 15px !important;padding-left: 10px !important;}” el_class=”floatingmenusblackmain2″]<\/p>\n Responsible AI transparency<\/span><\/a><\/p>\n [\/vc_column_text][vc_column_text css=”.vc_custom_1760611525090{margin-right: 10px !important;margin-bottom: 0px !important;margin-left: 10px !important;padding-top: 1px !important;padding-right: 10px !important;padding-bottom: 15px !important;padding-left: 10px !important;background-color: #FFFFFF00 !important;}” el_class=”floatingmenusblackbluetop”]<\/p>\n Build<\/span><\/a><\/p>\n How [\/vc_column_text][vc_column_text css=”.vc_custom_1760611536353{margin-right: 10px !important;margin-bottom: 0px !important;margin-left: 10px !important;padding-top: 1px !important;padding-right: 10px !important;padding-bottom: 15px !important;padding-left: 10px !important;background-color: #FFFFFF00 !important;}” el_class=”floatingmenusblacktealtop”]<\/p>\n Decide<\/span><\/a><\/p>\n How [\/vc_column_text][vc_column_text css=”.vc_custom_1760611548013{margin-right: 10px !important;margin-bottom: 0px !important;margin-left: 10px !important;padding-top: 1px !important;padding-right: 10px !important;padding-bottom: 15px !important;padding-left: 10px !important;background-color: #FFFFFF00 !important;}” el_class=”floatingmenusblackwaste”]<\/p>\n Support<\/span><\/a><\/p>\n How [\/vc_column_text][vc_column_text css=”.vc_custom_1760611573465{margin-right: 10px !important;margin-bottom: 0px !important;margin-left: 10px !important;padding-top: 1px !important;padding-right: 10px !important;padding-bottom: 15px !important;padding-left: 10px !important;background-color: #FFFFFF00 !important;}” 
el_class=”floatingmenusblackgrey”]<\/p>\n Learn<\/span><\/a><\/p>\n How [\/vc_column_text][vc_column_text css=”.vc_custom_1760024070362{margin-right: 10px !important;margin-bottom: 0px !important;margin-left: 10px !important;padding-top: 1px !important;padding-right: 10px !important;padding-bottom: 15px !important;padding-left: 10px !important;}” el_class=”floatingmenusblackmain2″]<\/p>\n View the 2025 report<\/i><\/span><\/a><\/p>\n [\/vc_column_text][vc_column_text css=”.vc_custom_1750331223384{margin-right: 10px !important;margin-bottom: 0px !important;margin-left: 10px !important;padding-top: 1px !important;padding-right: 10px !important;padding-bottom: 15px !important;padding-left: 10px !important;}”]Follow <\/span> Responsible AI transparency |<\/span>Build<\/p>\n [\/vc_column_text][\/vc_column_inner][\/vc_row_inner][vc_row_inner css=”.vc_custom_1750073646756{padding-top: 15px !important;padding-bottom: 15px !important;}”][vc_column_inner][\/vc_column_inner][\/vc_row_inner][vc_row_inner][vc_column_inner][vc_column_text css=””]<\/p>\n [\/vc_column_text][\/vc_column_inner][\/vc_row_inner][vc_row_inner][vc_column_inner][\/vc_column_inner][\/vc_row_inner][vc_row_inner content_placement=”top”][vc_column_inner width=”1\/3″][vc_btn title=”Learn how we build” style=”outline-custom” outline_custom_color=”#2D231E” outline_custom_hover_background=”#2D231E” outline_custom_hover_text=”#FFFFFF” shape=”square” i_align=”right” i_icon_fontawesome=”fa fa-solid fa-square-arrow-up-right” css_animation=”none” css=”” add_icon=”true” link=”url:https%3A%2F%2Faka.ms%2FResponsible-AI-Transparency-Report-2025%23page%3D5|target:_blank” el_class=”btn-build-main”][\/vc_column_inner][vc_column_inner width=”2\/3″][vc_column_text css=””]<\/p>\n [\/vc_column_text][vc_column_text css=””]When we embark on the development and deployment of a new AI system, we apply the AI Risk Management Framework created by the National Institute of Standards and Technology (NIST), which includes four key
functions: govern, map, measure, and manage.[\/vc_column_text][\/vc_column_inner][\/vc_row_inner][vc_row_inner css=”.vc_custom_1749246224999{padding-top: 25px !important;padding-bottom: 25px !important;}”][vc_column_inner][\/vc_column_inner][\/vc_row_inner][vc_row_inner content_placement=”middle”][vc_column_inner width=”1\/2″][vc_column_text css=”.vc_custom_1750253924974{padding-top: 5px !important;padding-right: 20px !important;padding-bottom: 20px !important;padding-left: 20px !important;background-color: #1871C3 !important;}”]Govern:<\/strong><\/span><\/p>\n Our responsible AI governance architecture helps us uphold our principles consistently across the company. It involves establishing clear policies, processes, roles, and responsibilities.<\/span>[\/vc_column_text][vc_column_text css=”.vc_custom_1749653733369{padding-top: 5px !important;padding-right: 20px !important;padding-bottom: 20px !important;padding-left: 20px !important;background-color: #2673BA !important;}”]Map:<\/strong><\/span><\/p>\n Mapping and prioritizing risks enables us to make informed decisions about mitigations and the appropriateness of an AI application for a given context.<\/span>[\/vc_column_text][vc_column_text css=”.vc_custom_1749656206012{padding-top: 5px !important;padding-right: 20px !important;padding-bottom: 20px !important;padding-left: 20px !important;background-color: #1B588D !important;}”]Measure:<\/strong><\/span><\/p>\n AI risk measurement helps inform the prioritization and design of mitigations\u2014a practice that grew in importance in 2024 as AI capabilities became more complex.<\/span>[\/vc_column_text][vc_column_text css=”.vc_custom_1749656222159{padding-top: 5px !important;padding-right: 20px !important;padding-bottom: 20px !important;padding-left: 20px !important;background-color: #2D3A55 !important;}”]Manage:<\/strong><\/span><\/p>\n Once we\u2019ve mapped and measured risks, we manage them across the AI technology stack through a “defense in depth” approach.
After deployment, we continue to manage risks through ongoing monitoring.<\/span><\/p>\n [\/vc_column_text][\/vc_column_inner][vc_column_inner width=”1\/2″][vc_single_image image=”1647″ img_size=”full” alignment=”center” css=””][\/vc_column_inner][\/vc_row_inner][vc_row_inner][vc_column_inner][\/vc_column_inner][\/vc_row_inner][vc_row_inner el_class=”accordian” css=”.vc_custom_1746127415870{margin-top: 40px !important;margin-right: 2% !important;margin-bottom: 40px !important;margin-left: 2% !important;padding-top: 5px !important;padding-right: 20px !important;padding-bottom: 20px !important;padding-left: 20px !important;background-color: #FFFFFF26 !important;}”][vc_column_inner width=”1\/5″][vc_single_image image=”1662″ img_size=”full” css=””][\/vc_column_inner][vc_column_inner width=”4\/5″][vc_column_text css=””]Case study<\/p>\n [\/vc_column_text][vc_column_text css=””]<\/p>\n In 2024, more people voted in elections across the world than at any other time in history. Microsoft took proactive measures in partnership with governments, nonprofit organizations, and private sector companies to prevent the creation and dissemination of deceptive AI-generated election content.<\/p>\n [\/vc_column_text][vc_btn title=”Learn more” style=”outline-custom” outline_custom_color=”#2D231E” outline_custom_hover_background=”#2D231E” outline_custom_hover_text=”#FFFFFF” shape=”square” i_align=”right” i_icon_fontawesome=”fa fa-solid fa-square-arrow-up-right” css_animation=”none” css=”” add_icon=”true” link=”url:https%3A%2F%2Faka.ms%2FResponsible-AI-Transparency-Report-2025%23page%3D12|target:_blank” el_class=”btn-manage-ai”][\/vc_column_inner][\/vc_row_inner][\/vc_column][\/vc_row][vc_row css=”.vc_custom_1749642638891{padding-right: 5% !important;padding-left: 5% !important;background-color: #FFFFFF00 !important;}” el_id=”make”][vc_column width=”1\/5″ offset=”vc_hidden-md vc_hidden-sm vc_hidden-xs”][\/vc_column][vc_column width=”4\/5″][vc_row_inner][vc_column_inner][vc_column_text 
css=””]<\/p>\n Responsible AI transparency |<\/span>Decide<\/p>\n [\/vc_column_text][\/vc_column_inner][\/vc_row_inner][vc_row_inner css=”.vc_custom_1750073668083{padding-top: 15px !important;padding-bottom: 15px !important;}”][vc_column_inner][\/vc_column_inner][\/vc_row_inner][vc_row_inner][vc_column_inner][vc_column_text css=””]<\/p>\n [\/vc_column_text][\/vc_column_inner][\/vc_row_inner][vc_row_inner][vc_column_inner][\/vc_column_inner][\/vc_row_inner][vc_row_inner content_placement=”top”][vc_column_inner width=”1\/3″][vc_btn title=”Learn how we make decisions” style=”outline-custom” outline_custom_color=”#2D231E” outline_custom_hover_background=”#2D231E” outline_custom_hover_text=”#FFFFFF” shape=”square” i_align=”right” i_icon_fontawesome=”fa fa-solid fa-square-arrow-up-right” css_animation=”none” css=”” add_icon=”true” link=”url:https%3A%2F%2Faka.ms%2FResponsible-AI-Transparency-Report-2025%23page%3D15|target:_blank” el_class=”btn-decide-decision”][\/vc_column_inner][vc_column_inner width=”2\/3″][vc_column_text css=””]<\/p>\n [\/vc_column_text][vc_column_text css=””]Throughout 2024, we continued to refine our pre-deployment oversight processes, which include our deployment safety process for generative AI systems and models, as well as the Sensitive Uses and Emerging Technologies program.
We also launched an internal workflow tool to further support responsible AI documentation and review processes.[\/vc_column_text][\/vc_column_inner][\/vc_row_inner][vc_row_inner css=”.vc_custom_1749246224999{padding-top: 25px !important;padding-bottom: 25px !important;}”][vc_column_inner][\/vc_column_inner][\/vc_row_inner][vc_row_inner css=”.vc_custom_1746626336056{margin-bottom: -50px !important;}”][vc_column_inner width=”1\/2″][vc_column_text css_animation=”none” css=”.vc_custom_1750347026017{padding-top: 35px !important;}” el_class=”border”]<\/p>\n Before deploying our generative AI applications and models, teams review their risk management approach with experts across the Responsible AI Community. These experts provide recommendations and requirements grounded in our responsible AI policies.<\/p>\n <\/p>\n Learn more<\/span> [\/vc_column_text][\/vc_column_inner][vc_column_inner width=”1\/2″][vc_column_text css_animation=”none” css=”.vc_custom_1750347081965{padding-top: 35px !important;}” el_class=”border”]<\/p>\n Our Sensitive Uses and Emerging Technologies program provides pre-deployment review and oversight of high-impact and higher-risk uses of AI. 
Reviews often culminate in requirements that go beyond our Responsible AI Standard.<\/p>\n <\/p>\n Learn more<\/span> [\/vc_column_text][\/vc_column_inner][\/vc_row_inner][vc_row_inner css=”.vc_custom_1749212052786{background-color: #FFFFFF00 !important;}”][vc_column_inner][vc_column_text css=”.vc_custom_1749835461417{padding-top: 40px !important;padding-right: 40px !important;padding-bottom: 40px !important;padding-left: 40px !important;background-color: #2D231E !important;}”]<\/p>\n In 2024, 77% of cases that received consultations from the Sensitive Uses and Emerging Technologies team were related to generative AI.<\/span><\/p>\n [\/vc_column_text][\/vc_column_inner][\/vc_row_inner][vc_row_inner el_class=”accordian” css=”.vc_custom_1746127415870{margin-top: 40px !important;margin-right: 2% !important;margin-bottom: 40px !important;margin-left: 2% !important;padding-top: 5px !important;padding-right: 20px !important;padding-bottom: 20px !important;padding-left: 20px !important;background-color: #FFFFFF26 !important;}”][vc_column_inner width=”1\/5″][vc_single_image image=”1674″ img_size=”full” css=””][\/vc_column_inner][vc_column_inner width=”4\/5″][vc_column_text css=””]Case study<\/p>\n [\/vc_column_text][vc_column_text css=””]<\/p>\n The Phi model team released three collections of Phi models in 2024 and early 2025, each unlocking new capabilities. 
The team used a “break-fix” framework to inform deployment safety for each release.[\/vc_column_text][vc_btn title=”Learn more” style=”outline-custom” outline_custom_color=”#2D231E” outline_custom_hover_background=”#2D231E” outline_custom_hover_text=”#FFFFFF” shape=”square” i_align=”right” i_icon_fontawesome=”fa fa-solid fa-square-arrow-up-right” css_animation=”none” css=”” add_icon=”true” link=”url:https%3A%2F%2Faka.ms%2FResponsible-AI-Transparency-Report-2025%23page%3D16|target:_blank” el_class=”btn-decide-safely-section”][\/vc_column_inner][\/vc_row_inner][vc_row_inner el_class=”accordian” css=”.vc_custom_1746127415870{margin-top: 40px !important;margin-right: 2% !important;margin-bottom: 40px !important;margin-left: 2% !important;padding-top: 5px !important;padding-right: 20px !important;padding-bottom: 20px !important;padding-left: 20px !important;background-color: #FFFFFF26 !important;}”][vc_column_inner width=”1\/5″][vc_single_image image=”1664″ img_size=”full” css=””][\/vc_column_inner][vc_column_inner width=”4\/5″][vc_column_text css=””]Case study<\/p>\n [\/vc_column_text][vc_column_text css=””]<\/p>\n Smart Impression is an AI-powered productivity tool for radiologists. 
Through the Sensitive Uses review process, the product team identified and mitigated key risks related to using AI in a healthcare setting.[\/vc_column_text][vc_btn title=”Learn more” style=”outline-custom” outline_custom_color=”#2D231E” outline_custom_hover_background=”#2D231E” outline_custom_hover_text=”#FFFFFF” shape=”square” i_align=”right” i_icon_fontawesome=”fa fa-solid fa-square-arrow-up-right” css_animation=”none” css=”” add_icon=”true” link=”url:https%3A%2F%2Faka.ms%2FResponsible-AI-Transparency-Report-2025%23page%3D20|target:_blank” el_class=”btn-decide-safely-second-section”][\/vc_column_inner][\/vc_row_inner][\/vc_column][\/vc_row][vc_row css=”.vc_custom_1749642651362{padding-right: 5% !important;padding-left: 5% !important;background-color: #FFFFFF00 !important;}” el_id=”support”][vc_column width=”1\/5″ offset=”vc_hidden-md vc_hidden-sm vc_hidden-xs”][\/vc_column][vc_column width=”4\/5″][vc_row_inner][vc_column_inner][vc_column_text css=””]<\/p>\n Responsible AI transparency |<\/span>Support<\/p>\n [\/vc_column_text][\/vc_column_inner][\/vc_row_inner][vc_row_inner css=”.vc_custom_1747653666039{padding-top: 15px !important;padding-bottom: 15px !important;}”][vc_column_inner][\/vc_column_inner][\/vc_row_inner][vc_row_inner][vc_column_inner][vc_column_text css=””]<\/p>\n [\/vc_column_text][\/vc_column_inner][\/vc_row_inner][vc_row_inner][vc_column_inner][\/vc_column_inner][\/vc_row_inner][vc_row_inner content_placement=”top”][vc_column_inner width=”1\/3″][vc_btn title=”Learn how we support our customers” style=”outline-custom” outline_custom_color=”#2D231E” outline_custom_hover_background=”#2D231E” outline_custom_hover_text=”#FFFFFF” shape=”square” i_align=”right” i_icon_fontawesome=”fa fa-solid fa-square-arrow-up-right” css_animation=”none” css=”” add_icon=”true” link=”url:https%3A%2F%2Faka.ms%2FResponsible-AI-Transparency-Report-2025%23page%3D21|target:_blank” el_class=”btn-support-main”][\/vc_column_inner][vc_column_inner width=”2\/3″][vc_column_text 
css=””]<\/p>\n [\/vc_column_text][vc_column_text css=””]As developers and deployers of AI technology, it\u2019s our responsibility to support our customers in their own responsible AI journeys. We regularly share our tools and practices with our customers and eagerly engage in dialogue to learn how we can better support them in innovating responsibly.[\/vc_column_text][\/vc_column_inner][\/vc_row_inner][vc_row_inner css=”.vc_custom_1749650078198{padding-top: 50px !important;}”][vc_column_inner][\/vc_column_inner][\/vc_row_inner][vc_row_inner css=”.vc_custom_1746626336056{margin-bottom: -50px !important;}”][vc_column_inner width=”1\/3″][vc_column_text css_animation=”none” css=”.vc_custom_1750348482015{padding-top: 35px !important;}” el_class=”border”]<\/p>\n We continue to expand and build on the AI Customer Commitments we first announced in 2023. In 2024, we extended our Customer Copyright Commitments to include our reseller partners.<\/p>\n <\/p>\n Learn more<\/span> [\/vc_column_text][\/vc_column_inner][vc_column_inner width=”1\/3″][vc_column_text css_animation=”none” css=”.vc_custom_1750348501131{padding-top: 35px !important;}” el_class=”border”]<\/p>\n Responsible AI tooling is critical to achieving consistent alignment with our internal AI policies. We\u2019ve released 30 responsible AI tools that include more than 155 features to support our customers\u2019 responsible AI development.<\/p>\n <\/p>\n Learn more<\/span> [\/vc_column_text][\/vc_column_inner][vc_column_inner width=”1\/3″][vc_column_text css_animation=”none” css=”.vc_custom_1750348512514{padding-top: 35px !important;}” el_class=”border”]<\/p>\n We\u2019re committed to equipping our customers with the information they need to innovate responsibly. 
Since 2019, we\u2019ve published 40 Transparency Notes containing key information about our platform services.<\/p>\n <\/p>\n Learn more<\/span> [\/vc_column_text][\/vc_column_inner][\/vc_row_inner][vc_row_inner css=”.vc_custom_1750153803770{padding-top: 10px !important;}”][vc_column_inner][\/vc_column_inner][\/vc_row_inner][vc_row_inner el_class=”accordian” css=”.vc_custom_1746127415870{margin-top: 40px !important;margin-right: 2% !important;margin-bottom: 40px !important;margin-left: 2% !important;padding-top: 5px !important;padding-right: 20px !important;padding-bottom: 20px !important;padding-left: 20px !important;background-color: #FFFFFF26 !important;}”][vc_column_inner width=”1\/5″][vc_single_image image=”1665″ img_size=”full” css=””][\/vc_column_inner][vc_column_inner width=”4\/5″][vc_column_text css=””]Case study<\/p>\n [\/vc_column_text][vc_column_text css=””]<\/p>\n Microsoft-owned platform LinkedIn became the first professional networking platform to display the C2PA Content Credentials for all AI-generated images and videos uploaded to LinkedIn\u2019s feed.<\/p>\n [\/vc_column_text][vc_btn title=”Learn more” style=”outline-custom” outline_custom_color=”#2D231E” outline_custom_hover_background=”#2D231E” outline_custom_hover_text=”#FFFFFF” shape=”square” i_align=”right” i_icon_fontawesome=”fa fa-solid fa-square-arrow-up-right” css_animation=”none” css=”” add_icon=”true” link=”url:https%3A%2F%2Faka.ms%2FResponsible-AI-Transparency-Report-2025%23page%3D25|target:_blank” el_class=”btn-support-content-card”][\/vc_column_inner][\/vc_row_inner][\/vc_column][\/vc_row][vc_row css=”.vc_custom_1749641818341{padding-right: 5% !important;padding-left: 5% !important;background-color: #FFFFFF00 !important;}” el_id=”learn”][vc_column width=”1\/5″ offset=”vc_hidden-md vc_hidden-sm vc_hidden-xs”][\/vc_column][vc_column width=”4\/5″][vc_row_inner][vc_column_inner][vc_column_text css=””]<\/p>\n Responsible AI transparency |<\/span>Learn<\/p>\n 
[\/vc_column_text][\/vc_column_inner][\/vc_row_inner][vc_row_inner css=”.vc_custom_1747653683743{padding-top: 15px !important;padding-bottom: 15px !important;}”][vc_column_inner][\/vc_column_inner][\/vc_row_inner][vc_row_inner][vc_column_inner][vc_column_text css=””]<\/p>\n [\/vc_column_text][\/vc_column_inner][\/vc_row_inner][vc_row_inner][vc_column_inner][\/vc_column_inner][\/vc_row_inner][vc_row_inner content_placement=”top” css=”.vc_custom_1746788105756{padding-top: 10px !important;padding-bottom: 40px !important;}”][vc_column_inner width=”1\/3″][vc_btn title=”Explore our approach” style=”outline-custom” outline_custom_color=”#2D231E” outline_custom_hover_background=”#2D231E” outline_custom_hover_text=”#FFFFFF” shape=”square” i_align=”right” i_icon_fontawesome=”fa fa-solid fa-square-arrow-up-right” css_animation=”none” css=”” add_icon=”true” link=”url:https%3A%2F%2Faka.ms%2FResponsible-AI-Transparency-Report-2025%23page%3D27|target:_blank” el_class=”btn-learn-explore”][\/vc_column_inner][vc_column_inner width=”2\/3″][vc_column_text css=””]<\/p>\n [\/vc_column_text][vc_column_text css=””]From the beginning, Microsoft has committed to scaling our responsible AI program to meet the growing demand for this technology. 
For us, this means investing in research, working across sectors to advance effective global governance of AI, and tuning into a wide range of perspectives.[\/vc_column_text][\/vc_column_inner][\/vc_row_inner][vc_row_inner css=”.vc_custom_1746626336056{margin-bottom: -50px !important;}”][vc_column_inner width=”1\/3″][vc_column_text css_animation=”none” css=”.vc_custom_1750348680791{padding-top: 35px !important;}” el_class=”border”]<\/p>\n Throughout 2024, Microsoft researchers collaborated closely with our policy and engineering teams to push the frontiers of how we map, measure, and manage AI risks.<\/p>\n <\/p>\n Learn more<\/span> [\/vc_column_text][\/vc_column_inner][vc_column_inner width=”1\/3″][vc_column_text css_animation=”none” css=”.vc_custom_1750348701610{padding-top: 35px !important;}” el_class=”border”]<\/p>\n We are working with governments around the world to build globally coherent governance frameworks that enable organizations of all kinds to innovate with AI.<\/p>\n <\/p>\n
[vc_column_text css=”.vc_custom_1748992410680{margin-top: 50px !important;}”]<\/p>\n
How we build, support our customers, and grow<\/h2>\n
Key takeaways<\/h2>\n
Responsible AI tooling<\/h3>\n
Approach to compliance<\/h3>\n
Pre-deployment reviews<\/h3>\n
Sensitive uses of AI<\/h3>\n
Investments in research<\/h3>\n
Coherent governance frameworks<\/span><\/h3>\n
[\/vc_column_text][\/vc_column_inner][\/vc_row_inner][\/vc_column][vc_column width=”4\/5″ css=”.vc_custom_1750165495077{padding-left: 8% !important;}”][vc_row_inner][vc_column_inner][vc_column_text css=””]<\/p>\n
How we build AI responsibly<\/h2>\n
How we build generative AI systems and models responsibly<\/h2>\n
Managing AI-related risks in 2024 elections<\/h2>\n
How we make decisions<\/h2>\n
How we make decisions about releasing generative AI systems and models<\/h2>\n
Deployment safety for generative AI systems and models<\/h3>\n
Sensitive Uses and Emerging Technologies program<\/h3>\n
77% generative AI<\/span><\/h2>\n
Safely deploying Phi small language models<\/h2>\n
Safely deploying Smart Impression<\/h2>\n
How we support our customers<\/h2>\n
How we support our customers in building AI responsibly<\/h2>\n
AI Customer Commitments<\/h3>\n
Tooling to support customers<\/h3>\n
Transparency to support customers<\/h3>\n
Content credentials on LinkedIn<\/h2>\n
How we learn, evolve, and grow<\/h2>\n
How we learn, evolve, and grow in our responsible AI work<\/h2>\n
Investments in research<\/h3>\n
Advancing AI adoption through good governance<\/h3>\n