{"id":35,"date":"2021-04-03T16:24:33","date_gmt":"2021-04-03T16:24:33","guid":{"rendered":"https:\/\/aitoolkit.test\/guideline\/make-clear-why-the-system-did-what-it-did\/"},"modified":"2023-05-24T19:29:41","modified_gmt":"2023-05-25T02:29:41","slug":"make-clear-why-the-system-did-what-it-did","status":"publish","type":"guideline","link":"https:\/\/www.microsoft.com\/en-us\/haxtoolkit\/guideline\/make-clear-why-the-system-did-what-it-did\/","title":{"rendered":"Make clear why the system did what it did"},"content":{"rendered":"
\n\t\n\n

Guideline 11: Make clear why the system did what it did
\"yellow<\/figure>\n\n<\/div>\n\n\n\n

Enable the user to access an explanation of why the AI system behaved as it did.

Make available an explanation for the AI system's actions/outputs, as appropriate.

Apply this guideline judiciously, keeping in mind that the mere presence of an explanation has been shown to increase user trust. This can cause over-reliance on the system and over-inflated expectations, and over-inflated expectations can lead to trusting an AI even when it is wrong (automation bias). For setting expectations, see also Guideline 1 and Guideline 2.

The explanation can be global, explaining the entire system, or local, explaining each individual output. Mix and match explanation patterns as needed, keeping in mind that not all explanations are equally effective in every scenario. Studies have shown that an explanation's content and design significantly affect whether it helps people achieve their goals or distracts them from doing so.

Use tools such as InterpretML to improve model explainability.
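As a concrete illustration of the global vs. local distinction above, the following minimal sketch fits a glassbox model with InterpretML and surfaces both kinds of explanation. The synthetic dataset, variable names, and train/test split are illustrative assumptions, not part of the guideline.

```python
# Minimal sketch (illustrative, not prescribed by the guideline):
# train an interpretable glassbox model with InterpretML and produce
# a global explanation plus local explanations for a few predictions.
from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Synthetic data stands in for the product's real features and labels
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Glassbox model: interpretable by design
ebm = ExplainableBoostingClassifier()
ebm.fit(X_train, y_train)

# Global explanation: which features drive the model's behavior overall
show(ebm.explain_global())

# Local explanation: per-feature contributions behind individual outputs
show(ebm.explain_local(X_test[:5], y_test[:5]))
```

A global explanation like this can back an "overview of how the system works" surface, while local explanations can be attached to individual outputs the user is questioning.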

Use Guideline 11 patterns (mix and match as appropriate) to explain the AI system's behavior: