Responsible AI: The research collaboration behind new open-source tools offered by Microsoft

Published February 27, 2023 | Microsoft Research Blog
https://www.microsoft.com/en-us/research/blog/responsible-ai-the-research-collaboration-behind-new-open-source-tools-offered-by-microsoft/
[Figure: flowchart]
As computing and AI advancements spanning decades enable incredible opportunities for people and society, they're also raising questions about responsible development and deployment. For example, the machine learning models powering AI systems may not perform the same for everyone or in every condition, potentially leading to harms related to safety, reliability, and fairness. Single metrics often used to represent model capability, such as overall accuracy, do little to demonstrate under which circumstances or for whom failure is more likely; meanwhile, common approaches to addressing failures, like adding more data and compute or increasing model size, don't get to the root of the problem. Moreover, these blanket trial-and-error approaches can be resource intensive and financially costly.
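To make the point about single metrics concrete, here is a minimal illustrative sketch (the group names and data are invented for illustration, not drawn from any Microsoft tool): an overall accuracy number can look strong even while the model fails far more often for one subgroup, which is exactly what disaggregated evaluation surfaces.

```python
# Illustrative sketch: overall accuracy can mask subgroup failures.
# Hypothetical labels and predictions for two groups (data is invented).
y_true = {"group_a": [1, 0, 1, 1, 0, 1, 0, 1], "group_b": [1, 0, 1, 0]}
y_pred = {"group_a": [1, 0, 1, 1, 0, 1, 0, 1], "group_b": [0, 1, 1, 0]}

def accuracy(truth, pred):
    # Fraction of predictions that match the labels.
    return sum(t == p for t, p in zip(truth, pred)) / len(truth)

# Overall accuracy pools every example together, hiding where errors cluster.
all_true = y_true["group_a"] + y_true["group_b"]
all_pred = y_pred["group_a"] + y_pred["group_b"]
print(f"overall: {accuracy(all_true, all_pred):.2f}")   # 0.83

# Disaggregated accuracy reveals that failures concentrate in group_b.
for group in y_true:
    print(f"{group}: {accuracy(y_true[group], y_pred[group]):.2f}")
# group_a: 1.00, group_b: 0.50
```

Here the pooled score of 0.83 looks reasonable, yet the model is wrong half the time for group_b, which a single overall metric never shows.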
