The HAX Toolkit story
Who we are
We’re human-AI interaction researchers and practitioners at Microsoft who focus on building responsible, human-centered AI. What is human-centered AI? It means ensuring that what we build benefits people and society, and that how we build begins and ends with people in mind.
Our goal is to empower people working in the broad range of roles needed to create fluid and responsible human-AI experiences by providing go-to resources spanning the end-to-end AI product development process. To that end, we make sure every tool we develop is grounded in observed needs, validated through rigorous research, and tested with real product teams.
Affiliated with Microsoft Research and Aether, Microsoft’s advisory body for AI, Ethics, and Effects in Engineering and Research, we help drive research and innovation for the company’s responsible AI initiative.
See this AI resource collection for more about the cutting-edge research and tools supporting responsible AI, with lists of publications, tutorials, podcasts, blogs, and on-demand webinars.
Want to get involved?
The HAX Toolkit is continuously evolving as we listen and learn from the community. We welcome contributions and feedback! Get in touch with us.
How the HAX Toolkit started
AI is fundamentally changing how we interact with computing systems. As advances in AI algorithms and capabilities continue, practitioners have been seeking guidance on how to design for the probabilistic and adaptive systems these advances drive: “[This is] the most ambiguous space I’ve ever worked in, in my years of working in design … There aren’t any real rules and we don’t have a lot of tools.”
To build a foundation for creating intuitive AI user experiences, we set out to synthesize more than two decades of knowledge from academics and practitioners in fields like human-computer interaction, user experience design, information retrieval, ubiquitous computing, human-robot interaction, and artificial intelligence.
In 2019, we introduced 18 generally applicable Guidelines for Human-AI Interaction. The Guidelines were validated through multiple rounds of evaluation, including a user study in which 49 design practitioners tested them against 20 popular AI-infused products, as described in our paper at the ACM CHI 2019 Conference on Human Factors in Computing Systems. Our studies also revealed gaps in our knowledge and highlighted opportunities for advancing the field.
As people began applying the Guidelines to their AI products, we discovered new challenges. We learned that, because the Guidelines affect all aspects of an AI system, including what data to collect and how AI models should be trained, implementing them was more cost-effective when teams brought all disciplines to the table and planned for them early in development. To facilitate these early planning conversations and drive team alignment, we co-developed the HAX Workbook with more than 40 AI practitioners in various roles and tested it with teams working on AI products or features.
We also learned that, because any of the Guidelines can be implemented in multiple ways, teams struggled to decide on the best approach for their specific product scenario. To bootstrap design ideation, we took a data-driven approach, distilling hundreds of examples of the Guidelines in everyday AI products into flexible design patterns. We validated the design patterns for usability and utility with UX designers working with AI. To accelerate the pace at which teams can create consistently high-quality AI experiences, we’ve shared these design patterns, along with illustrative examples, in our browsable and extensible HAX Design Library.
Through our engagements with AI teams, we also found that human-AI interaction issues are often caught late in product development, when they are typically more costly to fix. To shift human-AI interaction testing earlier in the development process, and with input and feedback from 12 UX practitioners, we created a low-cost tool for systematically and interactively exploring common AI failure scenarios, as detailed in this CHI 2021 research paper. You can start using the HAX Playbook to proactively design for interaction failures in natural language scenarios, or extend it to other AI scenarios on GitHub.
As we work with AI teams to create responsible human-AI experiences, we’ll continue to share our learnings through tools and guidance in the HAX Toolkit.