Fairness-related harms in AI systems: Examples, assessment, and mitigation
AI has transformed modern life via previously unthinkable feats, from machines that can master the ancient board game Go and self-driving cars to developments we experience more routinely, such as virtual agents and personalized product recommendations. Simultaneously, these new opportunities have raised new challenges—most notably, challenges that have highlighted the potential for AI systems to cause fairness-related harms. Indeed, the fairness of AI systems is one of the key concerns facing society as AI continues to influence our lives in new ways.
In this webinar, Microsoft researchers Hanna Wallach and Miroslav Dudík will guide you through how AI systems can lead to a variety of fairness-related harms. They will then dive deeper into assessing and mitigating two specific types: allocation harms and quality-of-service harms. Allocation harms occur when AI systems allocate resources or opportunities in ways that can have significant negative impacts on people’s lives, often in high-stakes domains like education, employment, finance, and healthcare. Quality-of-service harms occur when AI systems, such as speech recognition or face detection systems, fail to provide a similar quality of service to different groups of people.
Together, you’ll explore:
- Examples of fairness-related harms and where these harms originate
- Assessment methods for allocation harms and quality-of-service harms
- Unfairness mitigation algorithms, including when they can and can’t be used and what their advantages and disadvantages are
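The assessment approach for quality-of-service harms discussed in the webinar centers on disaggregated evaluation: computing a performance metric separately for each group of people and comparing the results. The sketch below illustrates this pattern in plain Python with made-up toy labels and group names; the Fairlearn toolkit linked in the resource list provides a fuller implementation of this idea.

```python
# Sketch of disaggregated evaluation for a quality-of-service harm:
# compute accuracy per group and report the largest gap between groups.
# All data below are made-up toy values for illustration only.

def group_accuracy(y_true, y_pred, groups):
    """Compute accuracy separately for each group in `groups`."""
    results = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        correct = sum(1 for i in idx if y_true[i] == y_pred[i])
        results[g] = correct / len(idx)
    return results

# Toy example: a classifier that serves group "b" less accurately.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 1, 1, 0, 1]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

per_group = group_accuracy(y_true, y_pred, groups)
gap = max(per_group.values()) - min(per_group.values())
```

Here `per_group` comes out to 0.75 for group "a" and 0.5 for group "b", so the accuracy gap of 0.25 flags a potential quality-of-service harm worth investigating.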
Resource list:
- Microsoft’s RAI resource center
- Microsoft’s FATE research group
- Fairlearn toolkit
- Hanna Wallach (researcher profile)
- Miroslav Dudík (researcher profile)
*This on-demand webinar features a previously recorded Q&A session and open captioning.
Explore more Microsoft Research webinars: https://aka.ms/msrwebinars
- Speakers: Hanna Wallach, Miroslav Dudík
- Affiliation: Microsoft Research
- Miroslav Dudík, Senior Principal Research Manager
- Hanna Wallach, Partner Research Manager