About
Hanna Wallach is a partner research manager at Microsoft Research New York City. Her research focuses on issues of fairness, accountability, transparency, and ethics as they relate to AI and machine learning. She collaborates with researchers from machine learning, natural language processing, human–computer interaction, and science and technology studies, as well as lawyers and policy makers; her research integrates both qualitative and quantitative perspectives. Previously, she developed machine learning and natural language processing methods for analyzing the structure, content, and dynamics of social processes. She collaborated with political scientists, sociologists, journalists, and others to understand how organizations function by analyzing publicly available interaction data, including email networks, document collections, press releases, meeting transcripts, and news articles. This work was supported by several NSF grants, an IARPA grant, and a grant from the OJJDP. The impact of Hanna’s work has been widely recognized. She has won best paper awards at AISTATS, CHI, and NAACL. In 2014, she was named one of Glamour magazine’s “35 Women Under 35 Who Are Changing the Tech Industry.” In 2016, she was named co-winner of the Borg Early Career Award. She served as the senior program chair for the NeurIPS 2018 conference and as the general chair for the NeurIPS 2019 conference. She currently serves on the NeurIPS Executive Board, the ICML Board, the FAccT Steering Committee, the WiML Senior Advisory Council, and the WiNLP Advisory Board.
Hanna is committed to increasing diversity in computing and has worked for almost two decades to address the underrepresentation of women, in particular. To that end, she co-founded two projects—the first of their kind—to increase women’s involvement in free and open source software development: Debian Women and the GNOME Women’s Summer Outreach Program (now Outreachy). She also co-founded the WiML Workshop. Hanna holds a BA in computer science from the University of Cambridge, an MSc in cognitive science and machine learning from the University of Edinburgh, and a PhD in machine learning from the University of Cambridge.
Featured content
Fairness-related harms in AI systems: Examples, assessment, and mitigation webinar
In this webinar, Microsoft researchers Hanna Wallach and Miroslav Dudík explain how AI systems can lead to a variety of fairness-related harms. They then dive deeper into assessing and mitigating two specific types: allocation harms and quality-of-service harms. Allocation harms occur when AI systems allocate resources or opportunities in ways that can have significant negative impacts on people’s lives, often in high-stakes domains like education, employment, finance, and healthcare. Quality-of-service harms occur when AI systems, such as speech recognition or face detection systems, fail to provide a similar quality of service to different groups of people.