Creating AI glass boxes – Open sourcing a library to enable intelligibility in machine learning

When AI systems impact people’s lives, it is critically important that people understand their behavior. By understanding that behavior, data scientists can properly debug their models. If they can reason about how models behave, designers can convey that information to end users. If doctors, judges, and other decision makers trust the models that underpin intelligent systems, they can make better decisions. More broadly, with a fuller understanding of models, end users may more readily accept the products and solutions powered by AI, and growing regulatory demands may be more easily satisfied.

In practice, achieving intelligibility can be complex and highly dependent on a host of variables and human factors, precluding anything resembling a “one-size-fits-all” approach. Intelligibility is an area of cutting-edge, interdisciplinary research, building on ideas from machine learning, psychology, human-computer interaction, and design.

Researchers at Microsoft have been working on how to create intelligible AI for years, and we are extremely excited to announce today that we are open sourcing, under the MIT license, a software toolkit called InterpretML that will enable developers to experiment with a variety of methods for explaining models and systems. InterpretML implements a number of intelligible models, including the Explainable Boosting Machine (an improvement over generalized additive models), as well as several methods for generating explanations of the behavior of black-box models or their individual predictions.
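
To give a sense of what this looks like in practice, here is a minimal sketch of training an Explainable Boosting Machine and viewing its explanations. It assumes the interpret package and scikit-learn are installed; the class and function names follow the library’s published API, though exact signatures may vary between releases.

```python
# Minimal sketch: fit a glass-box model with InterpretML and inspect it.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show

# Load a small tabular dataset and split it for training and evaluation.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit an Explainable Boosting Machine: a glass-box model that stays
# interpretable while often approaching black-box accuracy.
ebm = ExplainableBoostingClassifier()
ebm.fit(X_train, y_train)

# Global explanation: how each feature shapes predictions overall.
show(ebm.explain_global())

# Local explanation: why the model scored a few specific examples as it did.
show(ebm.explain_local(X_test[:5], y_test[:5]))
```

The global explanation summarizes each feature’s overall contribution to the model’s predictions, while the local explanation shows why individual examples received the scores they did.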

With easy access to many intelligibility methods in one place, developers can compare and contrast the explanations produced by different methods and select those that best suit their needs. Such comparisons can also help data scientists gauge how much to trust an explanation by checking for consistency across methods.
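
As one illustration of that kind of consistency check, the sketch below continues from the snippet above (reusing X_train, X_test, y_test, and ebm) and compares the glass-box EBM’s local explanations with LIME explanations of a black-box model. It assumes the optional lime dependency is installed; the constructor arguments for the black-box explainers have varied between releases of the library.

```python
# Sketch: compare explanations from a glass-box model and a black-box explainer.
from sklearn.ensemble import RandomForestClassifier

from interpret.blackbox import LimeTabular
from interpret import show

# Train an opaque model on the same data for comparison.
blackbox = RandomForestClassifier(random_state=0)
blackbox.fit(X_train, y_train)

# Wrap the black-box model's prediction function with a LIME explainer,
# using the training data as the background distribution.
lime_explainer = LimeTabular(blackbox.predict_proba, X_train)

# Local explanations from the glass-box EBM and from LIME on the black box;
# broadly consistent feature attributions are one informal signal that the
# explanations can be trusted.
show(ebm.explain_local(X_test[:5], y_test[:5]))
show(lime_explainer.explain_local(X_test[:5], y_test[:5]))
```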

We are looking forward to engaging with the open-source community in continuing to develop InterpretML. If you are interested, we warmly invite you to join us on GitHub.
