{"id":997074,"date":"2024-01-10T09:00:00","date_gmt":"2024-01-10T17:00:00","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/blog\/advancing-transparency-updates-on-responsible-ai-research\/"},"modified":"2024-01-10T08:36:06","modified_gmt":"2024-01-10T16:36:06","slug":"advancing-transparency-updates-on-responsible-ai-research","status":"publish","type":"post","link":"https:\/\/www.microsoft.com\/en-us\/research\/blog\/advancing-transparency-updates-on-responsible-ai-research\/","title":{"rendered":"Advancing transparency: Updates on responsible AI research"},"content":{"rendered":"\n

Editor's note: All papers referenced here represent collaborations throughout Microsoft and across academia and industry that include authors who contribute to Aether, the Microsoft internal advisory body for AI ethics and effects in engineering and research.

\"Blue<\/figure>\n\n\n\n

A surge of generative AI models in the past year has fueled much discussion about the impact of artificial intelligence on human history. Advances in AI have indeed challenged thinking across industries, from considering how people will function in creative roles to effects in education, medicine, manufacturing, and more. Whether exploring impressive new capabilities of large language models (LLMs) such as GPT-4 or examining the spectrum of machine learning techniques already embedded in our daily lives, researchers agree on the importance of transparency. For society to appropriately benefit from this powerful technology, people must be given the means for understanding model behavior.

Transparency is a foundational principle of responsible, human-centered AI and is the bedrock of accountability. AI systems have a wide range of stakeholders: AI practitioners need transparency for evaluating data and model architecture so they can identify, measure, and mitigate potential failures; people using AI, expert and novice, must be able to understand the capabilities and limitations of AI systems; people affected by AI-assisted decision-making should have insights for redress when necessary; and indirect stakeholders, such as residents of cities using smart technologies, need clarity about how AI deployment may affect them.

Providing transparency when working with staggeringly complex and often proprietary models must take different forms to meet the needs of people who work with either the model or the user interface. This article profiles a selection of recent efforts for advancing transparency and responsible AI (RAI) by researchers and engineers affiliated with Aether, the Microsoft advisory body for AI ethics and effects in engineering and research. This work includes investigating LLM capabilities and exploring strategies for unlocking specialized-domain competencies of these powerful models while urging transparency approaches for both AI system developers and the people using these systems. Researchers are also working toward improving identification, measurement, and mitigation of AI harms while sharing practical guidance, such as for red teaming LLM applications and for privacy-preserving computation. The goal of these efforts is to move from empirical findings to advancing the practice of responsible AI.

\"Demo<\/a><\/figure>
\n

Toward user-centered algorithmic recourse

In this demo of GAM Coach, an example of an AI transparency approach, an interactive interface lets stakeholders in a loan allocation scenario understand the factors on which a model based its prediction and what factors they can change to meet their goals.

Watch the demo
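GAM Coach itself pairs a generalized additive model with an interactive interface; as a loose illustration of the underlying recourse idea only, here is a minimal sketch using a plain scikit-learn classifier on synthetic data. The feature names, data, thresholds, and greedy search below are hypothetical stand-ins, not GAM Coach's implementation.

```python
# A minimal sketch of algorithmic recourse in the spirit of GAM Coach:
# given a denial, greedily nudge one mutable feature until the model
# approves. Data, feature names, and the search strategy are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["income", "debt_ratio", "credit_years"]

# Synthetic loan-approval data with a roughly linear decision rule.
X = rng.normal(loc=[50.0, 0.4, 8.0], scale=[15.0, 0.15, 4.0], size=(500, 3))
y = (X[:, 0] - 60 * X[:, 1] + 2 * X[:, 2] + rng.normal(0, 5, 500) > 40).astype(int)
model = LogisticRegression(max_iter=1000).fit(X, y)

def suggest_recourse(x, mutable, step, max_steps=50):
    """Try each mutable feature in turn; return the first change that
    flips the prediction from 'deny' (0) to 'approve' (1)."""
    for i in mutable:
        candidate = x.copy()
        for _ in range(max_steps):
            candidate[i] += step[i]
            if model.predict(candidate.reshape(1, -1))[0] == 1:
                return feature_names[i], candidate[i]
    return None

applicant = np.array([35.0, 0.6, 3.0])  # currently denied by the model
if model.predict(applicant.reshape(1, -1))[0] == 0:
    plan = suggest_recourse(applicant, mutable=[0, 1], step={0: 1.0, 1: -0.01})
    if plan:
        print(f"Changing {plan[0]} to about {plan[1]:.2f} flips the decision")
```

A tool like GAM Coach goes further than this sketch, letting people mark which factors are realistic for them to change and tailoring the suggested plans accordingly.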

Related papers