{"id":494648,"date":"2018-07-17T10:46:26","date_gmt":"2018-07-17T17:46:26","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?p=494648"},"modified":"2018-08-03T10:55:39","modified_gmt":"2018-08-03T17:55:39","slug":"machine-learning-for-fair-decisions","status":"publish","type":"post","link":"https:\/\/www.microsoft.com\/en-us\/research\/blog\/machine-learning-for-fair-decisions\/","title":{"rendered":"Machine Learning for fair decisions"},"content":{"rendered":"
Over the past decade, machine learning systems have begun to play a key role in many high-stakes decisions: Who is interviewed for a job? Who is approved for a bank loan? Who receives parole? Who is admitted to a school?
Human decision makers are susceptible to many forms of prejudice and bias, such as those rooted in gender and racial stereotypes. One might hope that machines would be able to make decisions more fairly than humans. However, news stories and numerous research studies have found that machine learning systems can inadvertently discriminate against minorities, historically disadvantaged populations and other groups.
In essence, this is because machine learning systems are trained to replicate the decisions present in their training data, and those decisions reflect society’s historical biases.
Naturally, researchers want to mitigate these biases, but there are several challenges. For example, there are many different definitions of fairness. Should the same number of men and women be interviewed for a job, or should the number of men and women interviewed reflect the proportions of men and women in the applicant pool? What about nonbinary applicants? Should machine learning systems even be used in hiring contexts? Answers to questions like these are non-trivial and often depend on societal context. On top of that, re-engineering existing machine learning pipelines to incorporate fairness considerations can be hard. How can you train a boosted-decision-tree classifier to respect specific gender proportions? What about other fairness definitions? What about training a two-layer neural network? Or a residual network? Each of these questions can require many months of research and engineering.
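To make the contrast between just two of these notions concrete, here is a small, purely illustrative sketch in Python (the applicant data are made up and do not come from the paper). It computes, for each group, both the raw number of applicants interviewed and the interview rate, which is what the two definitions above respectively compare.

```python
# Illustrative only: comparing two of the competing fairness notions
# mentioned above on made-up hiring data (the numbers are hypothetical).
import numpy as np

# 1 = interviewed, 0 = not interviewed, for a small applicant pool
interviewed = np.array([1, 0, 1, 1, 0, 0, 1, 0, 0, 0])
gender      = np.array(["m", "m", "m", "m", "m", "m", "f", "f", "f", "f"])

for g in ("m", "f"):
    mask = gender == g
    count = interviewed[mask].sum()
    rate = interviewed[mask].mean()
    print(f"{g}: {count} interviewed out of {mask.sum()} applicants ({rate:.0%})")

# Definition 1: interview the same *number* from each group
#   -> compare the counts (3 vs. 1 here).
# Definition 2: interview in *proportion* to the applicant pool
#   -> compare the rates (50% vs. 25% here).
```

Even in this toy example the two criteria measure unfairness differently, which is one reason the choice of definition has to come from the societal context rather than from the algorithm.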
Our work addresses some of these challenges. In a paper titled “A Reductions Approach to Fair Classification,” presented this month at the 35th International Conference on Machine Learning (ICML 2018) in Stockholm, Sweden, we provide a provably and empirically sound method for turning any common classifier into a “fair” classifier according to any of a wide range of fairness definitions.
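At a high level, the reduction rewrites the fairness-constrained problem as a sequence of weighted classification problems that any standard learner can solve. The sketch below is a minimal illustration of that idea, not the paper’s implementation: it handles a single fairness definition (demographic parity) and a binary sensitive attribute, and it replaces the paper’s exponentiated-gradient algorithm with a simple grid search over one Lagrange multiplier. It assumes NumPy and scikit-learn are available, and all function and variable names are our own.

```python
# A minimal sketch of the reduction idea (demographic parity, binary
# sensitive attribute, grid search over a single multiplier). Not the
# paper's implementation.
import numpy as np
from sklearn.linear_model import LogisticRegression


def parity_gap(pred, group):
    """Difference in positive-prediction rates between the two groups."""
    return pred[group == 1].mean() - pred[group == 0].mean()


def fit_fair_classifier(X, y, group, lambdas=np.linspace(-2.0, 2.0, 41), tol=0.02):
    """For each candidate multiplier, fold the fairness penalty into
    per-example costs, turn those costs into (label, weight) pairs, and
    hand the result to an off-the-shelf learner; keep the most accurate
    classifier whose parity gap is within `tol`."""
    n = len(y)
    n1, n0 = (group == 1).sum(), (group == 0).sum()
    best = None
    for lam in lambdas:
        # Lagrangian err(h) + lam * (E[h | A=1] - E[h | A=0]):
        # per-example cost of predicting 1 vs. predicting 0.
        cost1 = (y == 0) / n + lam * ((group == 1) / n1 - (group == 0) / n0)
        cost0 = (y == 1) / n
        # Cost-sensitive learning as weighted classification: target the
        # cheaper prediction, weighted by how much the choice matters.
        target = (cost0 > cost1).astype(int)
        weight = np.abs(cost0 - cost1)
        if len(np.unique(target)) < 2:  # degenerate problem; skip this lambda
            continue
        clf = LogisticRegression(max_iter=1000)
        clf.fit(X, target, sample_weight=weight)
        pred = clf.predict(X)
        err = np.mean(pred != y)
        if abs(parity_gap(pred, group)) <= tol and (best is None or err < best[0]):
            best = (err, clf)
    return best[1] if best is not None else None
```

The important design point is that the underlying learner is treated as a black box: fairness enters only through the per-example labels and weights, which is why the same recipe can be applied to boosted trees, neural networks or any other standard classifier.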