{"id":11349,"date":"2024-05-02T11:41:58","date_gmt":"2024-05-02T18:41:58","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/haxtoolkit\/?post_type=example&p=11349"},"modified":"2024-05-02T11:42:47","modified_gmt":"2024-05-02T18:42:47","slug":"alphacode-11d-map-system-input-attributes-to-system-outputs","status":"publish","type":"example","link":"https:\/\/www.microsoft.com\/en-us\/haxtoolkit\/example\/alphacode-11d-map-system-input-attributes-to-system-outputs\/","title":{"rendered":"AlphaCode | 11D: Map system input attributes to system outputs"},"content":{"rendered":"
AlphaCode, a code generator by DeepMind, makes clear why the system did what it did (Guideline 11) by highlighting the parts of the problem description that the model attended to when generating selected tokens in the code solution (Pattern 11D).

The techniques used in this example have the potential to foster appropriate reliance on AI by reducing overreliance. Keep in mind that overreliance mitigations can backfire. Be sure to test such mitigations in context, with your AI system's actual users. Learn more about overreliance on AI and appropriate reliance on generative AI.
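AlphaCode's own visualization code isn't shown here, but the core mechanism behind the pattern, mapping attention weights from each generated token back to tokens in the problem description, can be sketched. The following is a minimal sketch assuming a Hugging Face encoder-decoder code model; the model choice (Salesforce/codet5-small), the layer- and head-averaged cross-attention signal, and the top-k cutoff are illustrative assumptions, not AlphaCode's actual implementation.

```python
# Hypothetical sketch of Pattern 11D: for each token the model generates,
# surface which problem-description tokens it attended to. The model choice,
# averaging scheme, and top-k cutoff are assumptions, not AlphaCode's method.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_NAME = "Salesforce/codet5-small"  # illustrative encoder-decoder code model

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)
model.eval()


def attended_input_tokens(problem_description: str, top_k: int = 5):
    """For each generated token, return the top-k description tokens,
    ranked by cross-attention weight averaged over layers and heads."""
    inputs = tokenizer(problem_description, return_tensors="pt")
    with torch.no_grad():
        out = model.generate(
            **inputs,
            max_new_tokens=48,
            output_attentions=True,
            return_dict_in_generate=True,
        )

    input_tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
    # sequences[0] starts with the decoder start token; skip it so the ids
    # line up with the per-step cross-attention tuples.
    gen_tokens = tokenizer.convert_ids_to_tokens(out.sequences[0][1:].tolist())

    results = []
    for token_text, step_layers in zip(gen_tokens, out.cross_attentions):
        # step_layers: one tensor per decoder layer, each of shape
        # [batch, heads, query_len, source_len] for this generation step.
        stacked = torch.stack(step_layers)         # [layers, batch, heads, q, src]
        weights = stacked.mean(dim=(0, 2))[0, -1]  # avg layers+heads -> [src]
        top = torch.topk(weights, k=min(top_k, weights.numel()))
        results.append((token_text, [input_tokens[i] for i in top.indices.tolist()]))
    return results


# Example: print each generated token with the description tokens it attended to.
for token, attended in attended_input_tokens(
        "Read n integers and print the sum of the even ones."):
    print(f"{token:>12} <- {attended}")
```

In an interface like the one this example describes, per-token weights of this kind would drive the highlighting: selecting a token in the generated solution highlights the spans of the problem description with the highest attention weights.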