{"id":1041459,"date":"2024-06-05T09:00:00","date_gmt":"2024-06-05T16:00:00","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?p=1041459"},"modified":"2024-06-05T08:11:46","modified_gmt":"2024-06-05T15:11:46","slug":"microsoft-at-facct-2024-advancing-responsible-ai-research-and-practice","status":"publish","type":"post","link":"https:\/\/www.microsoft.com\/en-us\/research\/blog\/microsoft-at-facct-2024-advancing-responsible-ai-research-and-practice\/","title":{"rendered":"Microsoft at FAccT 2024: Advancing responsible AI research and practice"},"content":{"rendered":"\n
\"Microsoft<\/figure>\n\n\n\n

The integration of AI and other computational technologies is becoming increasingly common in high-stakes sectors such as finance, healthcare, and government, where their capacity to influence critical decisions is growing. While these systems offer numerous benefits, they also introduce risks, such as entrenching systemic biases and reducing accountability. The ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT 2024) tackles these issues, bringing together experts from a wide range of disciplines who are committed to the responsible development of computational systems.

Microsoft is proud to return as a sponsor of ACM FAccT 2024, underscoring our commitment to supporting research on responsible AI. We’re pleased to share that members of our team have taken on key roles in organizing the event, contributing to the program committee and serving as a program co-chair. Additionally, seven papers by Microsoft researchers and their collaborators have been accepted to the program, with “Akal badi ya bias: An exploratory study of gender bias in Hindi language technology” receiving an award for Best Paper.

Collectively, these research projects emphasize the need for AI technologies that reflect the Microsoft Responsible AI principles of accountability, inclusiveness, reliability and safety, fairness, transparency, and privacy and security. They underscore the importance of addressing potential risks and harms associated with deployment and usage. This post highlights these advances.


Paper highlights

A framework for exploring the consequences of AI-mediated enterprise knowledge access and identifying risks to workers

Anna Gausen, Bhaskar Mitra, Siân Lindley

Recent AI developments, especially LLMs, are significantly impacting organizational knowledge access and reshaping workplaces. These AI systems pose risks due to their interaction with organizational power dynamics. This paper introduces the Consequence-Mechanism-Risk framework to help identify risks to workers, categorizing them into issues related to value, power, and wellbeing. The framework is intended to help practitioners mitigate these risks and can be applied to other technologies, enabling better protection for workers.

A structured regression approach for evaluating model performance across intersectional subgroups

Christine Herlihy, Kimberly Truong, Alex Chouldechova, Miro Dudík

Disaggregated evaluation is a process used in AI fairness assessment that measures AI system performance across different subgroups, which are defined by combinations of demographic or other sensitive attributes. However, the sample sizes for intersectional subgroups are often very small, leading to their exclusion from analysis. This work introduces a structured regression approach that produces more reliable estimates of system performance in these subgroups. Tested on two publicly available datasets and several variants of semi-synthetic data, the method not only yielded more accurate results but also helped to identify key factors driving performance differences.
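To make the contrast concrete, here is a minimal, hypothetical sketch comparing naive per-cell averaging with a structured (main-effects) regression that borrows strength across related subgroups. The data, column names, and exact model form below are illustrative assumptions, not the paper’s specification.

```python
# Hypothetical sketch: naive per-cell accuracy vs. a structured (main-effects)
# regression for disaggregated evaluation. Data, column names, and the exact
# model form are illustrative assumptions, not the paper's specification.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2_000

# One row per evaluation example: sensitive attributes plus whether the
# AI system answered correctly.
df = pd.DataFrame({
    "gender": rng.choice(["woman", "man", "nonbinary"], size=n, p=[0.48, 0.48, 0.04]),
    "age_band": rng.choice(["18-34", "35-54", "55+"], size=n),
})
true_acc = 0.85 - 0.05 * (df["gender"] == "nonbinary") - 0.03 * (df["age_band"] == "55+")
df["correct"] = rng.random(n) < true_acc

# Naive disaggregated evaluation: per-cell averages. Small intersectional
# cells (e.g., nonbinary and 55+) are noisy or empty.
naive = df.groupby(["gender", "age_band"])["correct"].agg(["mean", "size"])
print(naive)

# Structured alternative: model correctness with main effects of the
# attributes, so each cell's estimate borrows strength from its marginal groups.
X = pd.get_dummies(df[["gender", "age_band"]], drop_first=True)
model = LogisticRegression(max_iter=1000).fit(X, df["correct"])

cells = pd.MultiIndex.from_product(
    [sorted(df["gender"].unique()), sorted(df["age_band"].unique())],
    names=["gender", "age_band"],
).to_frame(index=False)
X_cells = pd.get_dummies(cells).reindex(columns=X.columns, fill_value=0)
cells["estimated_accuracy"] = model.predict_proba(X_cells)[:, 1]
print(cells)
```

Because the regression pools information across the marginal groups, the tiny intersectional cells still receive usable estimates; interaction terms or regularization can be layered on when the data support them.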

Akal badi ya bias: An exploratory study of gender bias in Hindi language technology

Best Paper Award

Rishav Hada, Safiya Husain, Varun Gumma, Harshita Diddee, Aditya Yadavalli, Agrima Seth, Nidhi Kulkarni, Ujwal Gadiraju, Aditya Vashistha, Vivek Seshadri, Kalika Bali

Existing research on gender bias in language technologies primarily focuses on English, often overlooking non-English languages. This paper introduces the first comprehensive study of gender bias in Hindi, the third most spoken language globally. Employing diverse techniques and field studies, the authors expose the limitations of current methodologies and emphasize the need for more context-specific and community-centered research. The findings deepen the understanding of gender bias in language technologies in Hindi and lay the groundwork for expanded research into other Indic languages.

“I’m not sure, but…”: Examining the impact of large language models’ uncertainty expression on user reliance and trust

Sunnie S. Y. Kim, Q. Vera Liao, Mihaela Vorvoreanu, Stephanie Ballard, Jennifer Wortman Vaughan

LLMs can produce convincing yet incorrect responses, potentially misleading users who rely on them for accuracy. To mitigate this issue, it has been recommended that LLMs communicate uncertainty in their responses. In a large-scale study of how users perceive and act on LLMs’ expressions of uncertainty, participants were asked medical questions. The authors found that first-person uncertainty expressions (e.g., “I’m not sure, but…”) decreased participants’ confidence in the system and their tendency to agree with the system’s answers, while increasing the accuracy of their own answers. In contrast, more general uncertainty expressions (e.g., “It’s unclear, but…”) were less effective. The findings stress the importance of more thorough user testing before deploying LLMs.

Investigating and designing for trust in AI-powered code generation tools

Ruotong Wang, Ruijia Cheng, Denae Ford, Tom Zimmermann

As tools like GitHub Copilot gain popularity, understanding the trust software developers place in these applications becomes crucial for their adoption and responsible use. In a two-stage qualitative study, the authors interviewed 17 developers to understand the challenges they face in building trust in AI code-generation tools. Challenges identified include setting expectations, configuring tools, and validating suggestions. The authors also explore several design concepts to help developers establish appropriate trust and provide design recommendations for AI-powered code-generation tools.

Less discriminatory algorithms

Emily Black, Logan Koepke, Pauline Kim, Solon Barocas, Mingwei Hsu

In fields such as housing, employment, and credit, organizations using algorithmic systems should seek to use less discriminatory alternatives. Research in computer science has shown that for any prediction problem, multiple algorithms can deliver the same level of accuracy but differ in their impacts across demographic groups. This phenomenon, known as model multiplicity, suggests that developers might be able to find an equally performant yet potentially less discriminatory alternative.
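As a rough illustration of model multiplicity (not the paper’s methodology), the hypothetical sketch below compares two near-equally accurate classifiers, one of which leans on a group-correlated feature, and then prefers the candidate with the smallest measured disparity among those within a small accuracy tolerance. The synthetic data, the candidate set, and the selection-rate gap metric are assumptions chosen for illustration.

```python
# Illustrative sketch of model multiplicity and a "less discriminatory
# alternative" search. The synthetic data, candidate set, and selection-rate
# gap metric are assumptions for illustration, not the paper's methodology.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 20_000
group = rng.integers(0, 2, n)                  # hypothetical sensitive attribute
x1 = rng.normal(size=n) + 1.5 * group          # weakly predictive, group-correlated
x2 = rng.normal(size=n)                        # strongly predictive, group-neutral
y = (0.1 * x1 + x2 + rng.normal(scale=0.5, size=n) > 0).astype(int)
X = np.column_stack([x1, x2])

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.5, random_state=0
)

def evaluate(feature_idx):
    """Fit on a feature subset; return test accuracy and selection-rate gap."""
    model = LogisticRegression().fit(X_tr[:, feature_idx], y_tr)
    pred = model.predict(X_te[:, feature_idx])
    accuracy = (pred == y_te).mean()
    gap = abs(pred[g_te == 0].mean() - pred[g_te == 1].mean())
    return accuracy, gap

candidates = {"x1 + x2": [0, 1], "x2 only": [1]}
results = {name: evaluate(idx) for name, idx in candidates.items()}
for name, (acc, gap) in results.items():
    print(f"{name}: accuracy={acc:.3f}, selection-rate gap={gap:.3f}")

# Among candidates within one accuracy point of the best, prefer the one with
# the smallest measured disparity: the kind of search the paper argues
# developers of algorithmic systems should undertake.
best_acc = max(acc for acc, _ in results.values())
viable = {k: v for k, v in results.items() if v[0] >= best_acc - 0.01}
print("chosen:", min(viable, key=lambda k: viable[k][1]))
```

In a real audit the candidate set would be far richer (model classes, hyperparameters, feature choices, random seeds), but the selection logic, tolerating a small accuracy loss and minimizing measured disparity within that tolerance, stays the same.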

Participation in the age of foundation models

Harini Suresh, Emily Tseng, Meg Young, Mary Gray, Emma Pierson, Karen Levy

The rise of foundation models in public services brings both potential benefits and risks, including reinforcing power imbalances and harming marginalized groups. This paper explores how participatory AI/ML methods, typically context-specific, can be adapted to these context-agnostic models to empower those most affected.

Conference organizers from Microsoft

Program Co-Chair

Alexandra Olteanu

Program Committee

Steph Ballard
Solon Barocas
Su Lin Blodgett*
Kate Crawford
Shipi Dhanorkar
Amy Heger
Jake Hofman*
Emre Kiciman*
Vera Liao*
Daniela Massiceti
Bhaskar Mitra
Besmira Nushi*
Alexandra Olteanu
Amifa Raj
Emily Sheng
Jennifer Wortman Vaughan*
Mihaela Vorvoreanu*
Daricia Wilkinson

*Area Chairs

Career opportunities

Microsoft welcomes talented individuals across various roles at Microsoft Research, Azure Research, and other departments. We are always pushing the boundaries of computer systems to improve the scale, efficiency, and security of all our offerings. You can review our open research-related positions here.
