{"id":633765,"date":"2020-02-16T14:49:28","date_gmt":"2020-01-30T00:21:23","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-project&p=633765"},"modified":"2023-03-02T11:29:57","modified_gmt":"2023-03-02T19:29:57","slug":"msrgamut","status":"publish","type":"msr-project","link":"https:\/\/www.microsoft.com\/en-us\/research\/project\/msrgamut\/","title":{"rendered":"MSRGamut"},"content":{"rendered":"
Without good models and the right tools to interpret them, data scientists risk making decisions based on hidden biases, spurious correlations, and false generalizations. This has led to a rallying cry for model interpretability. Yet the concept of interpretability remains nebulous, such that researchers and tool designers lack actionable guidelines for how to incorporate interpretability into models and accompanying tools. Through an iterative design process with expert machine learning researchers and practitioners, we designed a visual analytics system, Gamut, to explore how interactive interfaces could better support model interpretation. Using Gamut as a probe, we investigated why and how professional data scientists interpret models, and how interface affordances can support data scientists in answering questions about model interpretability. Our investigation showed that interpretability is not a monolithic concept: data scientists have different reasons to interpret models and tailor explanations for specific audiences, often balancing competing concerns of simplicity and completeness. Participants also asked to use Gamut in their work, highlighting its potential to help data scientists understand their own data.<\/p>\n
The GAMUT prototype that was used as the design probe is available here<\/span><\/a>.<\/p>\n Fred Hohman, the lead author of the paper, published a Medium article<\/span><\/a> describing this paper.<\/p>\n The basic concept of the application, shown below, is to have separate but linked visualizations of feature importance, global feature contributions, and instance-level explanations.<\/p>\n Interacting with Gamut’s multiple coordinated views together.<\/p><\/div>\n This work was published in SIGCHI2019 under the following citation:<\/p>\n Gamut: A Design Probe to Understand How Data Scientists Understand Machine Learning Models<\/span><\/a> The models in the prototype are built using the open-source InterpretML library<\/span><\/a> or the pyGAM package<\/span><\/a>.<\/p>\n The GAMUT demonstration includes GAMs (Generalized Additive Models) built from several publicly available datasets:<\/p>\n GAMUT is a prototype to help people explore Generalized Additive Models, a class of interpretable machine learning models. We’ve included an interactive website that allows people to explore the models for different datasets. The prototype lets people explore the contribution curve for each feature of a model, as well as a waterfall diagram that explains how a prediction is made for every item in the dataset. 
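The contribution curves and the waterfall diagram both fall out of the same property: a GAM's prediction is an intercept plus a sum of one-dimensional shape functions, one per feature, so every prediction decomposes exactly into per-feature contributions. A minimal Python sketch of that decomposition (the features, shape functions, and constants below are invented for illustration and are not those of the prototype's models):

```python
import math

# Hypothetical learned 1-D shape functions, one per feature.
# In a real GAM (e.g. one fit with InterpretML or pyGAM) these
# curves are learned from data; here they are hand-written stand-ins.
def f_age(age):
    return 0.03 * (age - 40)

def f_bmi(bmi):
    return 0.1 * math.tanh((bmi - 25) / 5)

INTERCEPT = 0.2  # hypothetical model intercept
SHAPE_FUNCTIONS = {'age': f_age, 'bmi': f_bmi}

def predict_with_explanation(instance):
    """Return (prediction, per-feature contributions).

    The contributions sum exactly to prediction - INTERCEPT,
    which is what a waterfall diagram visualizes."""
    contributions = {name: f(instance[name])
                     for name, f in SHAPE_FUNCTIONS.items()}
    return INTERCEPT + sum(contributions.values()), contributions

pred, contribs = predict_with_explanation({'age': 50, 'bmi': 30})
```

Because the model is just this additive sum, plotting each shape function over its feature's range gives the global contribution curves, and evaluating them at a single instance gives that instance's waterfall explanation.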
","protected":false},"featured_media":0,"template":"","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"_classifai_error":"","footnotes":""},"research-area":[13556,13563,13554],"msr-locale":[268875],"msr-impact-theme":[],"msr-pillar":[],"class_list":["post-633765","msr-project","type-msr-project","status-publish","hentry","msr-research-area-artificial-intelligence","msr-research-area-data-platform-analytics","msr-research-area-human-computer-interaction","msr-locale-en_us","msr-archive-status-active"],"msr_project_start":"2019-06-03","related-publications":[563616],"related-downloads":[],"related-videos":[749332],"related-groups":[550641],"related-events":[637194,577950],"related-opportunities":[],"related-posts":[],"related-articles":[],"tab-content":[{"id":0,"name":"Link to interactive Prototype","content":"[caption id=\"attachment_633774\" align=\"alignnone\" width=\"1540\"]
Generalized Additive Modeling software:<\/h2>\n
Dataset Attribution:<\/h2>\n
\n
\n
\nBelsley D.A., Kuh, E. and Welsch, R.E. (1980) Regression Diagnostics: Identifying Influential Data and Sources of Collinearity.<\/em>\u00a0New York: Wiley.<\/li>\n
\nRipley, B.D. (1996)\u00a0Pattern Recognition and Neural Networks.<\/em> Cambridge: Cambridge University Press.<\/li>\n
\nDiamond data obtained from AwesomeGems.com on July 28, 2005.<\/li>\n<\/ul>\n<\/li>\n\n
\n1. Hungarian Institute of Cardiology. Budapest: Andras Janosi, M.D.
\n2. University Hospital, Zurich, Switzerland: William Steinbrunn, M.D.
\n3. University Hospital, Basel, Switzerland: Matthias Pfisterer, M.D.
\n4. V.A. Medical Center, Long Beach and Cleveland Clinic Foundation: Robert Detrano, M.D., Ph.D.<\/li>\n
\nPaulo Cortez, University of Minho, Guimar\u00e3es, Portugal,\u00a0http:\/\/www3.dsi.uminho.pt\/pcortez<\/span><\/a>
\nA. Cerdeira, F. Almeida, T. Matos and J. Reis, Viticulture Commission of the Vinho Verde Region(CVRVV), Porto, Portugal
\n@2009<\/li>\n
\nShip Hydromechanics Laboratory, Maritime and Transport Technology Department, Technical University of Delft.<\/li>\n<\/ul>\n<\/li>\n
\nThis work was published in SIGCHI2019 under the following citation:
\nGamut: A Design Probe to Understand How Data Scientists Understand Machine Learning Models
\nFred Hohman, Andrew Head, Rich Caruana, Robert DeLine, Steven Drucker
\nACM Conference on Human Factors in Computing Systems (CHI). Glasgow, UK, 2019.<\/p>\n<\/a> The interactive prototype used for interpretable machine learning for Generalized Additive Models.<\/a>[\/caption]"}],"slides":[],"related-researchers":[{"type":"guest","display_name":"Fred Hohman","user_id":633801,"people_section":"Section name 0","alias":""},{"type":"guest","display_name":"Andrew Head","user_id":633810,"people_section":"Section name 0","alias":""},{"type":"user_nicename","display_name":"Rich Caruana","user_id":33365,"people_section":"Section name 0","alias":"rcaruana"},{"type":"user_nicename","display_name":"Rob DeLine","user_id":33370,"people_section":"Section name 0","alias":"rdeline"},{"type":"user_nicename","display_name":"Steven Drucker","user_id":33564,"people_section":"Section name 0","alias":"sdrucker"},{"type":"user_nicename","display_name":"Dan Marshall","user_id":31536,"people_section":"Section name 0","alias":"danmar"}],"msr_research_lab":[199565],"msr_impact_theme":[],"_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-project\/633765","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-project"}],"about":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/types\/msr-project"}],"version-history":[{"count":14,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-project\/633765\/revisions"}],"predecessor-version":[{"id":924435,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-project\/633765\/revisions\/924435"}],"wp:attachment":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media?parent=633765"}],"wp:term":[{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=633765"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/resea
rch\/wp-json\/wp\/v2\/msr-locale?post=633765"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=633765"},{"taxonomy":"msr-pillar","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-pillar?post=633765"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}