{"id":631581,"date":"2020-01-21T10:05:05","date_gmt":"2020-01-21T18:05:05","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?p=631581"},"modified":"2020-01-21T10:06:19","modified_gmt":"2020-01-21T18:06:19","slug":"when-bias-begets-bias-a-source-of-negative-feedback-loops-in-ai-systems","status":"publish","type":"post","link":"https:\/\/www.microsoft.com\/en-us\/research\/blog\/when-bias-begets-bias-a-source-of-negative-feedback-loops-in-ai-systems\/","title":{"rendered":"When bias begets bias: A source of negative feedback loops in AI systems"},"content":{"rendered":"
Is bias in AI self-reinforcing? Decision-making systems that impact criminal justice, financial institutions, human resources, and many other areas often have bias. This is especially true of algorithmic systems that learn from historical data, which tends to reflect existing societal biases. In many high-stakes applications, like hiring and lending, these decision-making systems may even reshape the underlying populations. When the system is retrained on future data, it may become not less but more detrimental to historically disadvantaged groups. In order to build AI systems that are aligned with desirable long-term societal outcomes, we need to understand when and why such negative feedback loops occur, and we need to learn how to prevent them.<\/p>\n
We explored these negative feedback loops and related questions in our paper, \u201cThe Disparate Equilibria of Algorithmic Decision Making when Individuals Invest Rationally<\/a>,\u201d to be presented at the third annual ACM Conference on Fairness, Accountability, and Transparency (ACM FAT* 2020) in Barcelona, Spain<\/a>. This research started during my internship at Microsoft Research and was joint work with wonderful collaborators: Ashia Wilson<\/a>, Nika Haghtalab<\/a> of Cornell University (who was a postdoctoral researcher at Microsoft Research at the time), Adam Tauman Kalai<\/a>, Christian Borgs<\/a>, and Jennifer Chayes<\/a>.<\/p>\n In this work, we consider an economic model of how individuals respond to a classification algorithm, focusing on settings where each individual desires a positive classification (in other words, a positive reward). This includes many important applications, such as hiring and school admissions, where the reward is being hired or admitted.<\/p>\n We assume that people invest in a positive qualification, such as gaining job skills or achieving higher academic success, based on the expected gain for their demographic group under the company\u2019s or school\u2019s current hiring or admissions practices, which in our model we represent as an assessment rule. Each person thus faces a binary decision: invest in the qualification or not. The decision to invest has a cost, such as tuition or time.<\/p>\n The assessment rule estimates the qualification of individuals from their observable characteristics, such as resumes or SAT scores. The institution frequently retrains the rule on the current distribution of individuals to maximize its own benefit. We are interested in the long-term behavior of such dynamics: What is the assessment rule used by the institution at equilibrium? 
In this case, an equilibrium assessment rule is one that remains the same even after individuals respond to it. By understanding the stable equilibria that the dynamics tend toward, we can characterize how individuals will invest in the long term.<\/p>\nExamining an economic model of individual response to institutions\u2019 assessment rules and finding stable equilibria<\/h3>\n
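To build intuition, the retrain-and-respond dynamics can be sketched in a toy simulation. Everything below is a simplified, hypothetical instance rather than the paper\u2019s model: we assume costs drawn uniformly from [0, 1], a reward normalized to 1, and a threshold rule on a one-dimensional Gaussian score (mean MU_Q with investment, mean 0 without, unit variance); under those assumptions the institution\u2019s utility-maximizing threshold has a closed form, and an equilibrium is a fixed point of the resulting update map.

```python
import math

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

MU_Q = 2.0  # assumed mean score of qualified individuals (unqualified: 0, unit variance)

def best_threshold(rho):
    """Institution's best-response accept threshold given investment rate rho.

    Maximizes rho*P(score > t | qualified) - (1 - rho)*P(score > t | unqualified);
    for unit-variance Gaussians the first-order condition gives this closed form.
    """
    rho = min(max(rho, 1e-9), 1.0 - 1e-9)  # avoid log(0) at the boundary
    return MU_Q / 2.0 + math.log((1.0 - rho) / rho) / MU_Q

def invest_rate(theta):
    """Individuals' best response: with cost ~ Uniform[0, 1] and reward 1,
    exactly those whose cost is below the gain in acceptance probability
    from becoming qualified choose to invest."""
    gain = Phi(MU_Q - theta) - Phi(-theta)  # P(accept | qualified) - P(accept | unqualified)
    return min(max(gain, 0.0), 1.0)

def run_dynamics(rho0, steps=200):
    """Alternate retraining (threshold update) and individual response."""
    rho = rho0
    for _ in range(steps):
        rho = invest_rate(best_threshold(rho))
    return rho
```

Iterating `run_dynamics` from different starting investment rates shows where the population settles; a fixed point of this map plays the role of an equilibrium assessment rule, since retraining no longer changes the threshold once individuals have responded to it. The paper\u2019s model is richer (multiple groups, general cost distributions), which is what makes group-dependent equilibria possible.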