{"id":619572,"date":"2019-11-04T02:57:32","date_gmt":"2019-11-04T10:57:32","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-project&p=619572"},"modified":"2021-12-08T02:19:47","modified_gmt":"2021-12-08T10:19:47","slug":"dice","status":"publish","type":"msr-project","link":"https:\/\/www.microsoft.com\/en-us\/research\/project\/dice\/","title":{"rendered":"DiCE: Diverse Counterfactual Explanations for Machine Learning Classifiers"},"content":{"rendered":"

<p><em>How can we explain a machine learning model such that the explanation is both truthful to the model and interpretable to people?<\/em><\/p>\n

<p>The main objective of DiCE is to explain the predictions of ML-based systems that are used to inform decisions in societally critical domains such as finance, healthcare, education, and criminal justice. In these domains, it is important to provide explanations to all key stakeholders who interact with the ML model: model designers, decision-makers, decision-subjects, and decision-evaluators. Most explanation techniques, however, face an inherent tradeoff between fidelity and interpretability: a high-fidelity explanation for an ML model tends to be complex and hard to interpret, while an interpretable explanation is often inconsistent with the ML model it was meant to explain.<\/p>\n

<p>Counterfactual (CF) explanations offer a promising alternative. Rather than approximating an ML model or ranking features by their predictive importance, a CF explanation “interrogates” a model to find the changes that would flip the model’s decision. Specifically, DiCE provides this information by showing feature-perturbed versions of the same input that would have received a different outcome. For example, consider a person who applied for a loan and was rejected by the loan distribution algorithm of a financial company. DiCE would show the person a diverse set of feature-perturbed versions of their own profile that would have received the loan from the same ML model, e.g., “You would have received the loan if your income were higher by $10k”. In other words, a counterfactual explanation helps a decision-subject decide <i>what they should do next to obtain a desired outcome<\/i>, rather than providing them only with the important features that contributed to the prediction. In addition, CF explanations from DiCE are useful to decision-makers, who can use them to evaluate the trustworthiness of a particular prediction from the ML model. Similarly, CF explanations over multiple inputs can help decision-evaluators assess criteria such as fairness, and help model developers debug their models and prevent errors on new data.<\/p>\n

<p>Two key challenges in generating CF explanations are diversity and feasibility. The DiCE project aims to construct a universal engine that can explain any machine learning model in terms of feature perturbations. Current research focuses on ensuring that high-diversity CF explanations are produced, and that the generated CFs are also feasible with respect to an underlying causal model that generates the observed data.<\/p>\n

<p>DiCE is available as an open-source project on GitHub.<\/p>\n
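<p>For illustration, the following is a minimal sketch of how counterfactuals can be requested through the accompanying dice-ml Python package; the loan dataset, feature names, and classifier below are hypothetical placeholders, and the exact API may differ across package versions.<\/p>\n

<pre><code>
# Minimal sketch using the dice-ml package (illustrative data and model).
import pandas as pd
import dice_ml
from sklearn.ensemble import RandomForestClassifier

# Hypothetical loan data: 'income' and 'age' are continuous features,
# 'loan_approved' is the binary outcome predicted by the model.
df = pd.read_csv('loan_data.csv')
X, y = df.drop(columns=['loan_approved']), df['loan_approved']
clf = RandomForestClassifier().fit(X, y)

# Wrap the data and the trained model for DiCE.
d = dice_ml.Data(dataframe=df,
                 continuous_features=['income', 'age'],
                 outcome_name='loan_approved')
m = dice_ml.Model(model=clf, backend='sklearn')
exp = dice_ml.Dice(d, m)

# Generate a diverse set of counterfactuals for one rejected applicant:
# feature-perturbed versions of the same input that flip the prediction.
query = X.iloc[[0]]
cf = exp.generate_counterfactuals(query, total_CFs=4, desired_class='opposite')
cf.visualize_as_dataframe(show_only_changes=True)
<\/code><\/pre>\n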

","protected":false},"excerpt":{"rendered":"

The main objective of DiCE is to explain the predictions of ML-based systems that are used to inform decisions in societally critical domains such as finance, healthcare, education, and criminal justice. In these domains, it is important to provide explanations to all key stakeholders who interact with the ML model: model designers, decision-makers, decision-subjects, and decision-evaluators.<\/p>\n","protected":false},"featured_media":0,"template":"","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"_classifai_error":"","footnotes":""},"research-area":[13556],"msr-locale":[268875],"msr-impact-theme":[],"msr-pillar":[],"class_list":["post-619572","msr-project","type-msr-project","status-publish","hentry","msr-research-area-artificial-intelligence","msr-locale-en_us","msr-archive-status-active"],"msr_project_start":"2018-08-01","related-publications":[625998,631332,713995],"related-downloads":[619632],"related-videos":[],"related-groups":[],"related-events":[],"related-opportunities":[],"related-posts":[],"related-articles":[],"tab-content":[],"slides":[],"related-researchers":[{"type":"user_nicename","display_name":"Amit Sharma","user_id":30997,"people_section":"Section name 0","alias":"amshar"}],"msr_research_lab":[199562],"msr_impact_theme":[],"_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-project\/619572"}],"collection":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-project"}],"about":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/types\/msr-project"}],"version-history":[{"count":8,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-project\/619572\/revisions"}],"predecessor-version":[{"id":621942,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-project\/619572\/revisions\/621942"}],"wp:attachment":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media?parent=619572"}],"wp:term":[{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=619572"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=619572"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=619572"},{"taxonomy":"msr-pillar","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-pillar?post=619572"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}