{"id":685431,"date":"2020-08-18T08:38:21","date_gmt":"2020-08-18T15:38:21","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-group&p=685431"},"modified":"2022-01-19T14:03:22","modified_gmt":"2022-01-19T22:03:22","slug":"reliable-machine-learning","status":"publish","type":"msr-group","link":"https:\/\/www.microsoft.com\/en-us\/research\/group\/reliable-machine-learning\/","title":{"rendered":"Reliable Machine Learning"},"content":{"rendered":"
\n\t
\n\t\t
\n\t\t\t\t\t<\/div>\n\t\t\n\t\t
\n\t\t\t\n\t\t\t
\n\t\t\t\t\n\t\t\t\t
\n\t\t\t\t\t\n\t\t\t\t\t
\n\t\t\t\t\t\t
\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\t\t<\/span>\n\t\t\t\t\t\t\t\t\tReturn to Microsoft Research Lab – India\t\t\t\t\t\t\t\t<\/a>\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\n\n

Reliable Machine Learning

High-stakes decision-making in areas such as healthcare, finance, and governance requires accountability both for the decisions themselves and for how data is used to make them. Many concerns have been raised about whether machine learning (ML) models can meet these expectations. In many cases, the predictions of deployed ML models have turned out to be objectionable or to violate the expectations set for them before deployment.

A key reason is that ML models are often complex black boxes and thus have varying, unknown failure modes that are revealed only after deployment: models fail to achieve the reported high accuracies, lead to unfair decisions, and sometimes provide predictions that are plainly unacceptable given basic domain knowledge. To address these problems, there has been work on enhancing fairness, improving generalization to new data domains, and building explanations for ML models. However, these three goals (fairness, stability, and explanation) are often studied relatively independently of one another.

Our group works on the unified questions of model stability, fairness, and explanation. We believe that there are fundamental connections between stability (generalization), fairness, and explainability of an ML model. Having one without the other two is not useful: all three should be met for an ML model to deliver its stated objective in a high-stakes application. If a fair and explainable model is not stable across data distributions, its stated properties can vary over time and across domains. Similarly, stable and fair models that cannot be explained are difficult to debug or improve. And a stable and explainable model without fairness guarantees may be unacceptable for many applications.

As a concrete example, consider adversarial examples: small perturbations of input examples that make even a highly accurate ML model give incorrect predictions. A short code sketch after the list below illustrates how such perturbations can be constructed and used.

1. Adversarial examples can be used to regularize the training procedure and make a model robust to small perturbations of the data (a special case of stability).
2. Adversarial examples can serve as explanations by identifying the minimal changes to an input that would alter the model's prediction on it (counterfactual explanations).
3. Adversarial examples that change only certain protected attributes, such as gender or race, can be used to verify and optimize for fairness (fairness audit).
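
The minimal sketch below (illustrative, not the group's implementation) shows these ideas on a toy logistic-regression model: a one-step, FGSM-style perturbation that pushes an example across the decision boundary, and a flip of a protected attribute as a simple fairness probe. The synthetic data, the function name fgsm_perturb, and the choice of the last column as the protected attribute are all assumptions made for the example.

```python
# A minimal, self-contained sketch (illustrative, not the group's code).
# It trains a toy logistic regression, crafts a one-step FGSM-style
# adversarial perturbation, and probes fairness by flipping a hypothetical
# protected attribute while holding everything else fixed.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic data; by assumption, column 3 plays the role of a protected attribute.
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 3] + 0.1 * rng.normal(size=500) > 0).astype(int)
model = LogisticRegression().fit(X, y)

def fgsm_perturb(model, x, eps=0.5):
    """One-step perturbation for a linear model: move each feature by eps in
    the direction that pushes the example across the decision boundary
    (the sign of the loss gradient with respect to the input, as in FGSM)."""
    w = model.coef_.ravel()
    pred = model.predict(x.reshape(1, -1))[0]
    direction = np.sign(w) if pred == 0 else -np.sign(w)
    return x + eps * direction

# Pick the example closest to the decision boundary, where a small perturbation
# is most likely to change the prediction (stability / counterfactual explanation).
idx = int(np.argmin(np.abs(model.decision_function(X))))
x = X[idx]
x_adv = fgsm_perturb(model, x)
print("original prediction :", model.predict(x.reshape(1, -1))[0])
print("perturbed prediction:", model.predict(x_adv.reshape(1, -1))[0])

# Fairness audit: change only the protected attribute; a changed prediction
# indicates the model depends on that attribute.
x_flip = x.copy()
x_flip[3] = -x_flip[3]  # hypothetical recoding of the protected group
print("after attribute flip:", model.predict(x_flip.reshape(1, -1))[0])
```

Under these assumptions the perturbed example receives a different prediction than the original, and whether the attribute flip also changes the prediction is exactly the signal a fairness audit looks for.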

Browse our projects and publications for more details.
