{"id":822961,"date":"2022-03-01T15:23:15","date_gmt":"2022-03-01T23:23:15","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-blog-post&p=822961"},"modified":"2022-03-15T10:23:06","modified_gmt":"2022-03-15T17:23:06","slug":"explainability","status":"publish","type":"msr-blog-post","link":"https:\/\/www.microsoft.com\/en-us\/research\/articles\/explainability\/","title":{"rendered":"Explainability"},"content":{"rendered":"

Authors: Alejandro Gutierrez Munoz, Tommy Guy, Sally Kellaway

Trust and understanding of AI models' predictions through Customer Insights

AI models are becoming a normal part of many business operations, led by advancements in AI technologies and the democratization of AI. While AI is increasingly important in decision making, it can be challenging to understand what influences the outcomes of AI models. Critical details like the information used as input, the influence of missing data, and the use of unintended or sensitive input variables can all have an impact on a model's output. To use AI responsibly and to trust it enough to make decisions, we must have tools and processes in place to understand how the model reaches its conclusions.

Microsoft Dynamics 365 Customer Insights goes beyond just a predicted outcome and provides additional information that helps you better understand the model and its predictions. Using the latest AI technologies, Customer Insights surfaces the main factors that drive our predictions. In this blog post, we will talk about how Customer Insights' out-of-the-box AI models enable enterprises to better understand and trust AI models, as well as what actions can be taken based on the additional model interpretability.

\"Explainability

Figure 1: Explainability information on the results page of the Customer Lifetime Value Out of box model, designed to help you interpret model results.<\/em><\/p><\/div>\n

What is model interpretability and why is it important?

AI models are sometimes described as black boxes that consume information and output a prediction, with inner workings that are unknown. This raises serious questions about our reliance on AI technology. Can the model's prediction be trusted? Does the prediction make sense? AI model interpretability has emerged over the last few years as an area of research with the goal of providing insight into how AI models reach decisions.

AI models leverage information from the enterprise (data about customers, transactions, historic data, etc.) as inputs. We call these inputs features. Features are used by the model to determine the output. One way to achieve model interpretability is explainable AI, or model explainability: a set of techniques that describe which features influence a prediction. We'll talk about two approaches: local explainability, which describes how the model arrived at a single prediction (say, a single customer's churn score), and global explainability, which describes which features are most useful for making all predictions. Before we describe how a model produces explainability output and how you should interpret it, we need to describe how we construct features from input data.

AI feature design with interpretability in mind

AI models are trained using features, which are transformations of the raw input data that make it easier for the model to use. These transformations are a standard part of the model development process.

For instance, the input data may be a list of transactions with dollar amounts, but a feature might be the number of transactions in the last thirty days or the average transaction value. (Many features summarize more than one input row.) Before features are created, raw input data needs to be prepared and "cleaned". In a future post, we'll dive deeper into data preparation and the role that model explainability plays in it.
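
To make that concrete, here is a minimal sketch of how raw transactions could be summarized into those two features. The table schema and column names are hypothetical; the actual Customer Insights pipeline is not shown in this post.

```python
import pandas as pd

# Hypothetical raw transaction data; Customer Insights' actual schema is not
# shown in this post.
transactions = pd.DataFrame({
    "customer_id": [1, 1, 2, 2, 2],
    "timestamp": pd.to_datetime(
        ["2022-02-03", "2022-02-20", "2022-01-15", "2022-02-10", "2022-02-25"]),
    "amount": [12.50, 8.00, 150.00, 75.00, 60.00],
})

# Keep only the last thirty days of activity relative to a chosen scoring date.
scoring_date = pd.Timestamp("2022-03-01")
recent = transactions[transactions["timestamp"] >= scoring_date - pd.Timedelta(days=30)]

# Each feature summarizes many raw rows into a single value per customer.
features = recent.groupby("customer_id")["amount"].agg(
    transactions_last_30_days="count",
    avg_transaction_value="mean",
)
print(features)
```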

To provide a more concrete example of what a feature is and how it might be important to the model's prediction, take two features that might help predict customer churn: frequency of transactions and number of product types bought. In a coffee shop, frequency of transactions is likely a great predictor of continued patronage: the regulars who walk by every morning will likely continue to do so. But those regulars may always get the same thing: I always get a 12 oz black Americano and never get a mochaccino or a sandwich. That means the number of product types I buy isn't a good predictor of my churn: I buy the same product, but I get it every morning.

Conversely, the bank down the road may observe that I rarely visit the branch to transact. However, I've got a mortgage, two bank accounts, and a credit card with that bank. The bank's churn predictions might rely on the number of products/services bought rather than the frequency of buying a new product. Both models start with the same set of facts (frequency of transactions and number of product types) and predict the same thing (churn), but they have learned to use different features to make accurate predictions. Model authors created a pair of features that might be useful, but the model ultimately decides how, or whether, to use those features based on the context.

Feature design also requires understandable names for the features. If a user doesn't know what a feature means, it's hard to understand why the model thinks it's important! During feature construction, AI engineers work with product managers and content writers to create human-readable names for every feature. For example, a feature representing the average number of transactions for a customer in the last quarter could look something like 'avg_trans_last_3_months' in the data science experimentation environment. If we were to present features like this to business users, it could be difficult for them to understand exactly what that means.
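
A small illustration of that mapping is shown below. The technical names and labels are made up for this sketch; this is not the product's actual feature dictionary.

```python
# Made-up technical names and labels, for illustration only; this is not the
# product's actual feature dictionary.
feature_display_names = {
    "avg_trans_last_3_months": "Average number of transactions in the last quarter",
    "num_product_types": "Number of product types bought",
    "trans_frequency": "Frequency of transactions",
}

def display_name(feature: str) -> str:
    # Fall back to the raw name when no friendly label has been authored yet.
    return feature_display_names.get(feature, feature)

print(display_name("avg_trans_last_3_months"))
```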

Explainability via Game Theory

A main goal in model explainability is to understand the impact of including a feature in a model. For instance, one could train a model with all the features except one, then train a model with all features. The difference in accuracy between the two models' predictions is a measure of the importance of the feature that was left out. If the model with the feature is much more accurate than the model without it, then the feature was very important.
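
Here is a rough sketch of that leave-one-out idea, using scikit-learn on synthetic data; this only illustrates the intuition and is not the Customer Insights training pipeline.

```python
# Synthetic data and scikit-learn stand in for the real training pipeline;
# this only illustrates the leave-one-out intuition described above.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def accuracy_dropping(feature_to_drop=None):
    # Train and evaluate with one feature removed (or with all features).
    cols = [i for i in range(X_train.shape[1]) if i != feature_to_drop]
    model = GradientBoostingClassifier(random_state=0)
    model.fit(X_train[:, cols], y_train)
    return accuracy_score(y_test, model.predict(X_test[:, cols]))

baseline = accuracy_dropping(None)  # accuracy with every feature included
for i in range(X_train.shape[1]):
    # Importance of feature i = accuracy lost when feature i is left out.
    print(f"feature {i}: accuracy drop = {baseline - accuracy_dropping(i):.4f}")
```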

\"The

Figure 2: The basic idea to compute explainability is to understand each feature\u2019s contribution to the model\u2019s performance by comparing performance of the whole model to performance without the feature. In reality, we use Shapley values to identify each feature\u2019s contribution, including interactions, in one training cycle.<\/em><\/p><\/div>\n

There are nuances related to feature interaction (e.g., including both city name and zip code may be redundant: removing one won't impact model performance, but removing both would), but the basic idea remains the same: how much does including a feature contribute to model performance?

With hundreds of features, it's too expensive to train a model leaving each feature out one by one. Instead, we use a concept called Shapley values to identify feature contributions from a single training cycle. Shapley values are a technique from game theory, where the goal is to understand the gains and costs of several actors working in a coalition. In machine learning, the "actors" are features, and the Shapley value algorithm can estimate each feature's contribution even when features interact with other features.

If you are looking for (much!) more detail about Shapley analysis, a good place to start is the GitHub repository slundberg/shap: a game theoretic approach to explain the output of any machine learning model.
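
For illustration, a minimal sketch using that open-source shap package might look like the following. The scikit-learn model and made-up feature names stand in for the actual Customer Insights models.

```python
# The open-source shap package linked above; the scikit-learn model and
# made-up feature names stand in for the actual Customer Insights models.
import pandas as pd
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
X = pd.DataFrame(X, columns=["trans_frequency", "num_product_types",
                             "support_interactions", "avg_transaction_value"])
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer estimates Shapley values for tree ensembles in a single pass,
# with no need to retrain the model once per feature.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# shap_values[i, j] is feature j's contribution to record i's score (here, the
# model's margin for the positive class, e.g. churn).
shap.summary_plot(shap_values, X)
```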

\"Shap

Figure 3: Shap Contributions to model\u2019s prediction<\/p><\/div>\n

Other types of models, like deep learning neural networks, require novel methods to discover feature contributions. Customer Insights' sentiment model is a deep learning transformer model that uses thousands of features. To explain the impact of each feature, we leverage a technique known as integrated gradients. Most deep learning models are implemented using neural networks, which learn by fine-tuning the weights of the connections between the neurons in the network. Integrated gradients evaluate these connections to explain how different inputs influence the results. This lets us measure which words in a sentence contribute most to the final sentiment score.
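
As an illustrative sketch of integrated gradients, the snippet below uses a toy model and the open-source Captum library; the blog does not state which implementation Customer Insights uses internally.

```python
# A toy model and the open-source Captum library illustrate the idea; the blog
# does not state which implementation Customer Insights uses internally.
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

# Tiny scorer over a 4-dimensional input, standing in for a transformer that
# maps embedded text to a sentiment score.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
model.eval()

inputs = torch.rand(1, 4)       # one "embedded sentence"
baseline = torch.zeros(1, 4)    # all-zero reference input

ig = IntegratedGradients(model)
# Attributions estimate how much each input dimension moved the score away
# from what the model predicts for the baseline.
attributions, delta = ig.attribute(inputs, baseline, target=0,
                                   return_convergence_delta=True)
print(attributions, delta)
```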

\"Record

Figure 4: Model level explainability information generated for the Sentiment Analysis model.<\/em><\/p><\/div>\n


\"Record

Figure 5:\u00a0 Record level explainability information generated by the Sentiment analysis model.<\/em><\/p><\/div>\n

How to leverage the interpretability of a model

AI models output predictions for each record. A record is an instance or sample of the set we want to score; for example, for a churn model in Customer Insights, each customer is a record to score. Explainability is first computed at the record level (local explainability), meaning we compute the impact of each feature on the predicted output for a single record. If we are interested in a particular set of records (e.g., a specific set of customer accounts I manage), or just a few examples to validate our intuitions about which features might be important to the model, looking at local explainability makes sense. When we are interested in the main features across all scored records, we need to see the impact aggregated over every record, or global explainability.
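
The relationship between the two views can be sketched roughly as follows; the contribution matrix here is synthetic, standing in for output from a method like the SHAP computation sketched earlier.

```python
# The contribution matrix here is synthetic; in practice it would come from a
# method like the SHAP computation sketched earlier.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
feature_names = ["trans_frequency", "num_product_types",
                 "support_interactions", "avg_transaction_value"]
# One row of signed per-feature contributions for each scored customer record.
local_contributions = pd.DataFrame(rng.normal(size=(1000, 4)), columns=feature_names)

# Local explainability: the contributions for one specific customer record.
print(local_contributions.iloc[0].sort_values(key=np.abs, ascending=False))

# Global explainability: the mean absolute contribution across all records.
print(local_contributions.abs().mean().sort_values(ascending=False))
```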

\"Global

Figure 6:\u00a0 Global explainability example from the Churn model.<\/em><\/p><\/div>\n


Features can impact the score in a positive way or a negative way. For instance, a high number of support interactions might make a customer 13% more likely to churn, while more transactions per week might make the customer 5% less likely to churn. In these cases, a high numerical value for each feature (support calls or transactions per week) has opposing effects on the churn outcome. Feature impact therefore needs to consider both magnitude (size of impact) and directionality (positive or negative).
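
A small, made-up illustration of reading both magnitude and direction from per-feature contributions (the numbers mirror the example above and are not output from an actual model):

```python
# Made-up contribution values mirroring the example in the text; not output
# from an actual model.
import pandas as pd

contributions = pd.Series({
    "number_of_support_interactions": 0.13,   # pushes churn likelihood up
    "transactions_per_week": -0.05,           # pulls churn likelihood down
    "number_of_product_types": -0.02,
})

# Rank by magnitude, then report the direction separately.
for feature, value in contributions.sort_values(key=abs, ascending=False).items():
    direction = "increases" if value > 0 else "decreases"
    print(f"{feature}: {direction} churn likelihood by {abs(value):.0%}")
```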


\"Local

Figure 7: Local explainability example for the Business-to-Business Churn model.<\/em><\/p><\/div>\n


Acting on explainability information

Now that we have made the case for explainability as an important output of our AI models, the question is: what do I do with this information? For model creators, explainability is a powerful tool for feature design and model debugging, as it can highlight data issues introduced during ingestion, clean-up, transformations, and so on. It also helps validate the behavior of the model early on: does the way the model makes predictions pass a "sniff test", with obviously important features showing up as important in the model? For consumers of AI models, explainability helps validate their assumptions about what should matter to the model. It can also highlight trends and patterns in your customer base that deserve attention and can inform next steps.

Explainability is an integral part of providing more transparency into AI models, how they work, and why they make a particular prediction. Transparency is one of the core principles of Responsible AI, which we will cover in more detail in a future blog post.
