{"id":11325,"date":"2024-05-02T11:13:38","date_gmt":"2024-05-02T18:13:38","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/haxtoolkit\/?post_type=example&p=11325"},"modified":"2024-05-02T11:17:00","modified_gmt":"2024-05-02T18:17:00","slug":"alphacode-11a-local-explanations","status":"publish","type":"example","link":"https:\/\/www.microsoft.com\/en-us\/haxtoolkit\/example\/alphacode-11a-local-explanations\/","title":{"rendered":"AlphaCode | 11A: Local explanations"},"content":{"rendered":"
AlphaCode, a code generator by DeepMind, makes clear why the system did what it did (Guideline 11<\/a>) by providing token probabilities in the generated code solution (Pattern 11A<\/a>).<\/p>\n\n\n\n The techniques used in this example have the potential to foster appropriate reliance on AI by reducing overreliance. Keep in mind that overreliance mitigations can backfire, so be sure to test such mitigations in context, with your AI system\u2019s actual users. Learn more about\u00a0overreliance on AI\u00a0<\/a>and\u00a0appropriate reliance on generative AI<\/a>.<\/p>\n\n\n\n