Verification-focused explanations | 11A: Local explanations

Verification-focused explanations (Fok and Weld, 2023) make clear why the system did what it did (Guideline 11) by providing evidence that can help users verify AI output accuracy for complex visual-reasoning and open-domain question-answering (QA) tasks (Pattern 11A: Local explanations).
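As a rough illustration of this pattern, the sketch below pairs a generated answer with the evidence passages that support it, so a user can check the claim against its sources instead of accepting it on faith. The data structures and function names (`EvidencePassage`, `VerifiableAnswer`, `render_verifiable_answer`) are hypothetical and not taken from Fok and Weld's work or any particular toolkit; the sketch only shows one possible way to surface verification-focused evidence alongside an answer.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class EvidencePassage:
    """A source snippet the user can inspect to verify the answer."""
    source: str   # e.g., a document title or URL
    excerpt: str  # the text the answer is grounded in


@dataclass
class VerifiableAnswer:
    """An AI-generated answer bundled with the evidence that supports it."""
    question: str
    answer: str
    evidence: List[EvidencePassage]


def render_verifiable_answer(result: VerifiableAnswer) -> str:
    """Format the answer alongside its supporting evidence so the user
    can verify the output rather than rely on it blindly."""
    lines = [
        f"Q: {result.question}",
        f"A: {result.answer}",
        "",
        "Evidence to verify this answer:",
    ]
    for i, passage in enumerate(result.evidence, start=1):
        lines.append(f'  [{i}] {passage.source}: "{passage.excerpt}"')
    return "\n".join(lines)


if __name__ == "__main__":
    result = VerifiableAnswer(
        question="When did the Hubble Space Telescope launch?",
        answer="The Hubble Space Telescope launched in April 1990.",
        evidence=[
            EvidencePassage(
                source="NASA: About Hubble",
                excerpt="Hubble was launched aboard Space Shuttle Discovery on April 24, 1990.",
            ),
        ],
    )
    print(render_verifiable_answer(result))
```

In a real interface, each evidence item would typically link back to its source so users can judge for themselves whether the answer is supported.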
The techniques used in this example have the potential to foster appropriate reliance on AI by reducing overreliance. Keep in mind that overreliance mitigations can backfire, so be sure to test them in context with your AI system’s actual users. Learn more about overreliance on AI and appropriate reliance on generative AI.
