{"id":633765,"date":"2020-02-16T14:49:28","date_gmt":"2020-01-30T00:21:23","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-project&p=633765"},"modified":"2023-03-02T11:29:57","modified_gmt":"2023-03-02T19:29:57","slug":"msrgamut","status":"publish","type":"msr-project","link":"https:\/\/www.microsoft.com\/en-us\/research\/project\/msrgamut\/","title":{"rendered":"MSRGamut"},"content":{"rendered":"

Abstract<\/h2>\n

Without good models and the right tools to interpret them, data scientists risk making decisions based on hidden biases, spurious correlations, and false generalizations. This has led to a rallying cry for model interpretability. Yet the concept of interpretability remains nebulous, such that researchers and tool designers lack actionable guidelines for how to incorporate interpretability into models and accompanying tools. Through an iterative design process with expert machine learning researchers and practitioners, we designed a visual analytics system, Gamut, to explore how interactive interfaces could better support model interpretation. Using Gamut as a probe, we investigated why and how professional data scientists interpret models, and how interface affordances can support data scientists in answering questions about model interpretability. Our investigation showed that interpretability is not a monolithic concept: data scientists have different reasons to interpret models and tailor explanations for specific audiences, often balancing competing concerns of simplicity and completeness. Participants also asked to use Gamut in their work, highlighting its potential to help data scientists understand their own data.<\/p>\n

The Gamut prototype used as the design probe is available here (opens in new tab)<\/span><\/a>.<\/p>\n

Fred Hohman, the lead author of the paper, published a Medium article (opens in new tab)<\/span><\/a> describing this work.<\/p>\n

The basic concept of the application, shown below, is to have separate but linked visualizations of feature importance, global feature contributions, and instance-level explanations.<\/p>\n
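These three linked views are possible because a GAM's prediction decomposes additively into one term per feature. The sketch below is a minimal, self-contained illustration of that decomposition; the shape functions, feature names, and values are hypothetical stand-ins, not fitted models or code from Gamut.

```python
# Why a GAM supports Gamut-style explanations: the model predicts
# y = intercept + sum_i f_i(x_i), so each feature's contribution is an
# independent 1-D shape function. The shape functions below are made-up
# lambdas for illustration, not learned from data.

intercept = 100.0
shape_functions = {
    "sqft": lambda v: 0.05 * v,   # hypothetical: larger homes add value
    "age":  lambda v: -1.5 * v,   # hypothetical: older homes subtract value
}

def predict(instance):
    """Prediction: intercept plus every feature's additive contribution."""
    return intercept + sum(f(instance[name]) for name, f in shape_functions.items())

def explain(instance):
    """Instance-level explanation: the additive term contributed per feature."""
    return {name: f(instance[name]) for name, f in shape_functions.items()}

house = {"sqft": 2000, "age": 10}
print(predict(house))   # 100 + 100 - 15 = 185.0
print(explain(house))   # {'sqft': 100.0, 'age': -15.0}
```

Plotting each `f_i` over its feature's range gives the global shape-function view, while `explain` yields the per-instance bar chart of contributions.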

\"Figures

Interacting with Gamut’s multiple coordinated views together.<\/p><\/div>\n

This work was published at ACM CHI 2019 under the following citation:<\/p>\n

Gamut: A Design Probe to Understand How Data Scientists Understand Machine Learning Models (opens in new tab)<\/span><\/a>
\nFred Hohman, Andrew Head, Rich Caruana, Robert DeLine, Steven Drucker
\nACM Conference on Human Factors in Computing Systems (CHI). Glasgow, UK, 2019.<\/p>\n

Generalized Additive Modeling software:<\/h2>\n

The models in the prototype are built using the open-source InterpretML library (opens in new tab)<\/span><\/a>, or with the PyGAM package (opens in new tab)<\/span><\/a>.<\/p>\n

Dataset Attribution:<\/h2>\n

The Gamut demonstration includes GAMs (Generalized Additive Models) built from several publicly available datasets:<\/p>\n