{"id":791396,"date":"2021-11-16T08:00:19","date_gmt":"2021-11-16T16:00:19","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-research-item&p=791396"},"modified":"2021-11-16T12:40:50","modified_gmt":"2021-11-16T20:40:50","slug":"tutorial-best-practices-for-prioritizing-fairness-in-ai-systems","status":"publish","type":"msr-video","link":"https:\/\/www.microsoft.com\/en-us\/research\/video\/tutorial-best-practices-for-prioritizing-fairness-in-ai-systems\/","title":{"rendered":"Tutorial: Best practices for prioritizing fairness in AI systems"},"content":{"rendered":"
As artificial intelligence (AI) continues to transform people\u2019s lives, new opportunities bring new challenges. Most notably, when we assess the societal impact of AI systems, it\u2019s important to be aware of both their benefits, which we should strive to amplify, and their harms, which we should work to reduce. Developing and deploying AI systems responsibly means prioritizing fairness. This is especially important for AI systems that will be used in high-stakes domains like education, employment, finance, and healthcare. This tutorial will guide you through a variety of fairness-related harms caused by AI systems and their most common causes. We will then dive into the precautions we need to take to mitigate fairness-related harms when developing and deploying AI systems. Together, we\u2019ll explore examples of fairness-related harms and their causes; fairness dashboards for quantitatively assessing allocation harms and quality-of-service harms; and algorithms for mitigating fairness-related harms. We\u2019ll discuss when these algorithms should and shouldn\u2019t be used, along with their advantages and disadvantages.<\/p>\n
<strong>Resources:<\/strong><\/p>\n