News: Our slides from the FAccT Tutorial on Responsible AI Toolbox are available here.
ML algorithms and systems are often prone to severe bias and to highly consequential failure modes that are not well understood. This project advances the methods, tools, and infrastructure for debugging and mitigating these failure modes so that practitioners can act on them before deploying ML systems in the real world. The project is part of the Responsible AI Toolbox, a larger collaborative effort between Microsoft Research, AETHER, and Azure Machine Learning to integrate and build development tools for responsible AI.
The goal of this project is twofold:
- Building tools that enable ML engineers to identify, diagnose, and mitigate problems quickly and systematically.
- Conducting research that supports these processes by better understanding and improving algorithmic robustness and failure explainability across model architectures and data types.
Recent releases:
- Responsible AI Tracker: a JupyterLab extension for managing, tracking, and comparing RAI mitigation experiments. The goal is to accelerate improvement iterations by enabling ML practitioners to experiment and compare mitigation results quickly.
- Responsible AI Mitigations library: the ML backend for targeted mitigation steps, usable from the Responsible AI Tracker or any other RAI tool. Its functionality guides model improvement by targeting mitigations at the errors affecting particular data cohorts, with the goal of reducing performance discrepancies across cohorts; a conceptual sketch of this idea follows below.
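To make "targeted mitigation" concrete, here is a minimal Python sketch of the idea, assuming a tabular pandas workflow; `rebalance_cohort` and the column names are hypothetical illustrations, not the Responsible AI Mitigations API:

```python
import pandas as pd
from sklearn.utils import resample

def rebalance_cohort(df: pd.DataFrame, cohort_mask: pd.Series,
                     label_col: str) -> pd.DataFrame:
    """Oversample the minority label within one cohort, leaving
    the rest of the training data untouched."""
    cohort, rest = df[cohort_mask], df[~cohort_mask]

    # Split the cohort by label and oversample the smaller class.
    counts = cohort[label_col].value_counts()
    majority = cohort[cohort[label_col] == counts.idxmax()]
    minority = cohort[cohort[label_col] == counts.idxmin()]
    minority_up = resample(minority, replace=True,
                           n_samples=len(majority), random_state=0)

    return pd.concat([rest, majority, minority_up], ignore_index=True)

# Hypothetical usage, targeting a cohort flagged as underperforming:
# mask = (train_df["age"] < 30) & (train_df["income"] == "low")
# train_df = rebalance_cohort(train_df, mask, label_col="label")
```

Scoping the resampling to one cohort avoids distorting the label distribution in cohorts where the model already performs well, which is the point of a targeted, rather than global, mitigation.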
Past releases: In collaboration with Azure Machine Learning, AETHER, and the Mixed Reality Group, we have built the following Responsible AI tools:
- Error Analysis for identifying and diagnosing errors in ML models and systems (the core idea is sketched after this list).
- Responsible AI Dashboard, a one-stop dashboard that brings together several Responsible AI tools for error analysis, interpretability, causality, fairness, and decision making. The dashboard builds on other RAI offerings at Microsoft such as Error Analysis, Fairlearn, InterpretML, and EconML.
- BackwardCompatibilityML for training ML models that do not regress and do not introduce new errors when updated (a simplified sketch of a backward-compatibility loss appears below). The tool also provides visualizations for model comparison.
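The core idea behind Error Analysis-style diagnosis can be sketched as fitting a shallow surrogate tree on a model's error indicator, so that tree leaves describe cohorts where mistakes concentrate. This is a conceptual sketch, not the tool's implementation, and `find_error_cohorts` is a hypothetical helper:

```python
from sklearn.tree import DecisionTreeClassifier, export_text

def find_error_cohorts(model, X, y, feature_names, max_depth=3):
    """Fit a shallow tree on the error indicator; leaves that
    predict class 1 describe cohorts rich in model mistakes."""
    errors = (model.predict(X) != y).astype(int)
    surrogate = DecisionTreeClassifier(max_depth=max_depth, random_state=0)
    surrogate.fit(X, errors)
    # Each printed leaf is a candidate cohort for targeted mitigation.
    print(export_text(surrogate, feature_names=feature_names))
    return surrogate
```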
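Backward-compatible training is commonly framed as adding a penalty on examples the previous model classified correctly, discouraging an updated model from introducing new errors on them. The PyTorch sketch below shows one simplified variant of such a loss; it is an illustrative assumption, not BackwardCompatibilityML's exact objective:

```python
import torch.nn.functional as F

def backward_compatible_loss(new_logits, old_logits, targets, lam=1.0):
    """Cross-entropy plus an extra penalty on examples the old
    model got right, so the update avoids regressions there."""
    base = F.cross_entropy(new_logits, targets)

    # 1.0 where the previous model was already correct.
    old_correct = (old_logits.argmax(dim=1) == targets).float()

    # Weight those examples' per-example loss as a dissonance term.
    per_example = F.cross_entropy(new_logits, targets, reduction="none")
    dissonance = (old_correct * per_example).mean()

    return base + lam * dissonance
```

The hyperparameter `lam` trades off raw accuracy of the new model against compatibility with the old one.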