# Responsible AI Mitigations and Tracker: New open-source tools for guiding mitigations in Responsible AI

*Microsoft Research Blog, February 27, 2023*
**Responsible AI Mitigations**: https://github.com/microsoft/responsible-ai-toolbox-mitigations

**Responsible AI Tracker**: https://github.com/microsoft/responsible-ai-toolbox-tracker

**Authors**: Besmira Nushi (Principal Researcher) and Rahee Ghosh Peshawaria (Senior Program Manager)

The goal of responsible AI is to create trustworthy AI systems that benefit people while mitigating harms, which can occur when AI systems fail to perform with fair, reliable, or safe outputs for various stakeholders. Practitioner-oriented tools in this space help accelerate the model improvement lifecycle, from identification to diagnosis and then mitigation of responsible AI concerns. This blog describes two new open-source tools in this space developed at Microsoft Research as part of the larger Responsible AI Toolbox effort, in collaboration with Azure Machine Learning and Aether, the Microsoft advisory body for AI ethics and effects in engineering and research:

- **Responsible AI Mitigations**, a Python library for implementing and exploring mitigation steps for failure modes found in machine learning models.
- **Responsible AI Tracker**, a JupyterLab extension for tracking, managing, and comparing mitigation experiments.

Both new additions to the toolbox currently support structured tabular data.

Throughout the blog, you will learn how these tools fit into the everyday job of a data scientist, how they connect to other tools in the Responsible AI ecosystem, and how to use them for concrete problems in data science and machine learning. We will also use a concrete prediction scenario to illustrate the main functionalities of both tools and tie all insights together.

Traditional methods of addressing failures can rely too heavily on a single aggregate metric of model effectiveness, and tend to tackle any problems that arise with more data, more compute, bigger models, and better parameters.
While adding more data or compute as a blanket approach can help, addressing problems that negatively impact specific subsets of the data, or cohorts, requires a more systematic and cost-effective approach. Targeted model improvement encourages a systematic process of:

1. Identification of failures and underperforming cohorts.
2. Diagnosis of the factors behind those failures.
3. Mitigation steps targeted at the diagnosed issues.
4. Tracking and comparison of different mitigation experiments.

In this big picture, the Responsible AI Mitigations library helps data scientists not only implement but also customize mitigation steps according to the failure modes and issues they may have found during identification and diagnosis. Responsible AI Tracker then helps with interactively tracking and comparing mitigation experiments, enabling data scientists to see where the model has improved and whether performance varies across data cohorts. Both tools are meant to be used in combination with already available tools such as the Responsible AI Dashboard from the same toolbox, which supports failure mode identification and diagnosis.
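To make the cohort-centric workflow concrete, here is a minimal sketch using pandas and scikit-learn rather than the toolbox APIs themselves. The synthetic dataset, the cohort definitions, and the upsampling mitigation are illustrative assumptions chosen only to show the identify-mitigate-track loop on tabular data:

```python
# Minimal sketch of targeted model improvement on tabular data.
# Uses pandas + scikit-learn, NOT the Responsible AI toolbox APIs.
# Cohorts "A"/"B" and the upsampling mitigation are illustrative assumptions.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic tabular data with two cohorts; cohort "B" is rare and behaves
# differently: its label depends on x2, while cohort "A" depends on x1.
n = 4000
cohort = rng.choice(["A", "B"], size=n, p=[0.9, 0.1])
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = np.where(cohort == "A", x1 > 0, x2 > 0).astype(int)
df = pd.DataFrame({"x1": x1, "x2": x2, "cohort": cohort, "y": y})

train, test = train_test_split(df, test_size=0.5, random_state=0)
features = ["x1", "x2"]

def cohort_report(model, test_df):
    """Disaggregate accuracy by cohort instead of one aggregate number."""
    report = {"overall": accuracy_score(test_df["y"],
                                        model.predict(test_df[features]))}
    for name, grp in test_df.groupby("cohort"):
        report[name] = accuracy_score(grp["y"], model.predict(grp[features]))
    return report

# 1) Identification: a single aggregate metric hides the failing cohort "B".
baseline = LogisticRegression().fit(train[features], train["y"])
print("baseline:", cohort_report(baseline, test))

# 2) Targeted mitigation (illustrative): upsample the underperforming cohort
#    so the model no longer fits only the majority cohort's pattern.
minority = train[train["cohort"] == "B"]
boosted = pd.concat(
    [train, minority.sample(n=len(train), replace=True, random_state=0)]
)
mitigated = LogisticRegression().fit(boosted[features], boosted["y"])

# 3) Tracking: compare models cohort by cohort, not just overall, to see
#    both the improvement on "B" and any trade-off on "A".
print("mitigated:", cohort_report(mitigated, test))
```

Comparing the two reports side by side is exactly the kind of disaggregated bookkeeping that Tracker automates across notebooks and experiments; note that a mitigation can improve one cohort while regressing another, which is why per-cohort comparison matters.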
## Targeted model improvement