{"id":393287,"date":"2017-07-05T14:03:17","date_gmt":"2017-07-05T21:03:17","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-project&p=393287"},"modified":"2023-03-21T16:54:15","modified_gmt":"2023-03-21T23:54:15","slug":"intelligible-interpretable-and-transparent-machine-learning","status":"publish","type":"msr-project","link":"https:\/\/www.microsoft.com\/en-us\/research\/project\/intelligible-interpretable-and-transparent-machine-learning\/","title":{"rendered":"Intelligible, Interpretable, and Transparent Machine Learning"},"content":{"rendered":"

The importance of intelligibility and transparency in machine learning<\/h2>\n

Most real datasets have hidden biases. Being able to detect how bias in the data affects the model, and then to repair the model, is critical if we are going to deploy machine learning in applications that affect people\u2019s health, welfare, and social opportunities. This requires models that are intelligible.<\/p>\n

In machine learning, there is often a tradeoff between accuracy and intelligibility: the most accurate models usually are not very intelligible (for example, deep neural nets, boosted trees, random forests, and support vector machines), and the most intelligible models usually are less accurate (for example, linear or logistic regression). This tradeoff limits the accuracy of models that can be applied in mission-critical applications such as healthcare, where being able to understand, validate, edit, and ultimately trust a learned model is important.<\/p>\n

At Microsoft, we have developed a learning method based on generalized additive models that is as accurate as full-complexity models such as random forests, but which remains as intelligible as\u2014and in some cases is even more intelligible than\u2014models such as linear and logistic regression. We\u2019ve done this by applying modern machine learning methods and computational horsepower to the problems of training accurate generalized additive models and modeling important pairwise interactions. For many datasets, the new learning method is just as accurate as any other, but far more intelligible.<\/p>\n

We\u2019ve applied transparent learning to problems in healthcare, such as diabetes, pneumonia, and 30-day hospital readmission risk prediction. We\u2019ve also applied the new method to important social problems such as recidivism prediction and credit scoring, where biases based on race, gender, and nationality are important issues to take into account.<\/p>\n

In addition to making transparent what the models have learned, the new learning methods also make it easier to edit the models to remove bias or other errors that may have been introduced in the learning process. This is important because it is not enough just to know that a model has learned something inappropriate; one must also have a way of repairing the model to fix the issue.<\/p>\n","protected":false},"excerpt":{"rendered":"

The importance of intelligibility and transparency in machine learning Most real datasets have hidden biases. Being able to detect the impact of the bias in the data on the model, and then to repair the model, is critical if we are going to deploy machine learning in applications that affect people\u2019s health, welfare, and social […]<\/p>\n","protected":false},"featured_media":395084,"template":"","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"footnotes":""},"research-area":[13556],"msr-locale":[268875],"msr-impact-theme":[],"msr-pillar":[],"class_list":["post-393287","msr-project","type-msr-project","status-publish","has-post-thumbnail","hentry","msr-research-area-artificial-intelligence","msr-locale-en_us","msr-archive-status-active"],"msr_project_start":"","related-publications":[393341,393359,393368],"related-downloads":[],"related-videos":[],"related-groups":[],"related-events":[],"related-opportunities":[],"related-posts":[],"related-articles":[],"tab-content":[],"slides":[],"related-researchers":[{"type":"user_nicename","display_name":"Rich Caruana","user_id":33365,"people_section":"People","alias":"rcaruana"},{"type":"user_nicename","display_name":"Paul Koch","user_id":33207,"people_section":"People","alias":"paulkoch"},{"type":"user_nicename","display_name":"Nick Craswell","user_id":33088,"people_section":"People","alias":"nickcr"},{"type":"guest","display_name":"Tom Finley","user_id":396491,"people_section":"People","alias":""},{"type":"user_nicename","display_name":"Harsha 
Nori","user_id":41461,"people_section":"People","alias":"hanori"}],"msr_research_lab":[199565],"msr_impact_theme":[],"_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-project\/393287"}],"collection":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-project"}],"about":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/types\/msr-project"}],"version-history":[{"count":6,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-project\/393287\/revisions"}],"predecessor-version":[{"id":400409,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-project\/393287\/revisions\/400409"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media\/395084"}],"wp:attachment":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media?parent=393287"}],"wp:term":[{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=393287"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=393287"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=393287"},{"taxonomy":"msr-pillar","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-pillar?post=393287"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}