Weight of Evidence as a Basis for Human-Oriented Explanations
Interpretability is an elusive but highly sought-after characteristic of modern machine learning methods. Recent work has focused on interpretability via explanations, which justify individual model predictions. In this work, we take a step towards reconciling machine explanations with those that humans produce and prefer, drawing inspiration from the study of explanation in philosophy, cognitive science, and the social sciences. We identify key aspects in which these human explanations differ from current machine explanations, distill them into a list of desiderata, and formalize them into a framework via the notion of weight of evidence from information theory. Finally, we instantiate this framework in two simple applications and show that it produces intuitive and comprehensible explanations.
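As a concrete illustration of the weight-of-evidence notion invoked above, the following sketch implements I. J. Good's classical log likelihood-ratio definition, woe(h : e) = log [P(e | h) / P(e | ¬h)]; the function name and the toy probabilities are illustrative, not taken from the paper:

```python
import math

def weight_of_evidence(p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Weight of evidence (in nats) that evidence e lends to hypothesis h,
    per Good's definition: woe(h : e) = log P(e|h) / P(e|not h).
    Positive values favor h; negative values favor its complement."""
    return math.log(p_e_given_h / p_e_given_not_h)

# Toy example with hypothetical numbers: a feature observed with
# probability 0.8 under the hypothesis and 0.2 under its complement.
woe = weight_of_evidence(0.8, 0.2)  # log(4), evidence favors h
```

One appealing property for explanation is additivity: for conditionally independent pieces of evidence, their individual weights simply sum, so each feature's contribution can be reported separately.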