{"id":748126,"date":"2019-01-22T13:42:09","date_gmt":"2019-01-22T21:42:09","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-research-item&p=748126"},"modified":"2021-05-24T13:53:43","modified_gmt":"2021-05-24T20:53:43","slug":"machine-learning-and-fairness","status":"publish","type":"msr-video","link":"https:\/\/www.microsoft.com\/en-us\/research\/video\/machine-learning-and-fairness\/","title":{"rendered":"Machine Learning and Fairness"},"content":{"rendered":"

<p>Originally a discipline limited to academic circles, machine learning is now increasingly mainstream, used in ever more visible and impactful ways. While this growing field presents huge opportunities, it also comes with unique challenges, particularly regarding fairness.<\/p>\n

<p>Nearly every stage of the machine learning pipeline\u2014from task definition and dataset construction to testing and deployment\u2014is vulnerable to biases that can cause a system to, at best, underserve users and, at worst, disadvantage already disadvantaged subpopulations.<\/p>\n

<p>In this webinar led by Microsoft researchers Jenn Wortman Vaughan and Hanna Wallach, 15-year veterans of the machine learning field, you\u2019ll learn how to make detecting and mitigating biases a first-order priority in your development and deployment of ML systems.<\/p>\n

<p>Together, you\u2019ll explore:<\/p>\n