{"id":1024596,"date":"2024-05-02T15:38:06","date_gmt":"2024-05-02T22:38:06","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-blog-post&p=1024596"},"modified":"2024-06-19T13:16:32","modified_gmt":"2024-06-19T20:16:32","slug":"appropriate-reliance-research-initiative","status":"publish","type":"msr-blog-post","link":"https:\/\/www.microsoft.com\/en-us\/research\/articles\/appropriate-reliance-research-initiative\/","title":{"rendered":"Appropriate Reliance Research Initiative"},"content":{"rendered":"\n
\"New<\/figure>\n\n\n\n

The Appropriate Reliance research initiative advances research and creates practical solutions for fostering appropriate reliance on AI.<\/p>\n\n\n\n

Through appropriate reliance (opens in new tab)<\/span><\/a>, we aim to help people who use AI systems strike a balance between over-trusting, accepting AI outputs even when they are incorrect (i.e., overreliance on AI (opens in new tab)<\/span><\/a>), and under-trusting, rejecting or not using AI systems even when they perform well (i.e., underreliance on AI).<\/p>\n\n\n\n

Both over- and under-reliance on AI can have negative consequences, such as harms caused by poor performance of the human+AI team or product abandonment.<\/p>\n\n\n\n

The research initiative brings together researchers and practitioners from across the company, with the goal of delivering:<\/p>\n\n\n\n