The Teachable AI Experiences team (Tai X) aims to pioneer teachable AI systems that allow people near or far from the norm to create meaningful, personalized experiences for themselves. What we ALL have in common is that we are unique. Millions of people find that they do not fit into one of the coarse-grained buckets that have become the technical underpinning of today's AI technologies (see Research Talk: Bucket of Me). While we can attempt to shoehorn in cultural, economic, and ability diversity by expanding our datasets and tweaking our algorithms, that approach ultimately does not scale to everyone on the planet.
Teachable AI systems offer a way to rethink the architecture of current AI systems to create truly inclusive, human-centric experiences. In the teachable paradigm, users provide example data or make choices that influence the configuration of the model (i.e., the priors), shaping their experience of the AI system with agency. To achieve compelling teachable experiences, we must drive innovation in machine learning. Few-shot learning, for example, is needed to reduce the number of examples required to personalize the model. Methods of interpretability (e.g., uncertainty quantification) support the design of effective human-AI feedback loops.
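To make the teachable paradigm concrete, here is a minimal sketch of few-shot personalization paired with a simple uncertainty signal that drives a human-AI feedback loop. It assumes a pre-trained embedding function and a user-interface hook for requesting more examples (both hypothetical, named embed and request_more_examples below), and it uses a prototypical-network-style nearest-prototype classifier purely as an illustrative stand-in, not as the team's actual method.

import numpy as np

def build_prototypes(examples):
    # `examples` maps each user-defined label to an array of example
    # embeddings with shape (n_examples, dim); the prototype is their mean.
    return {label: embs.mean(axis=0) for label, embs in examples.items()}

def classify(query, prototypes, temperature=1.0):
    # Nearest-prototype classification; a softmax over negative distances
    # gives a rough confidence score that can drive the feedback loop.
    labels = list(prototypes)
    dists = np.array([np.linalg.norm(query - prototypes[l]) for l in labels])
    probs = np.exp(-dists / temperature)
    probs = probs / probs.sum()
    best = int(np.argmax(probs))
    return labels[best], float(probs[best])

# Feedback loop (hypothetical hooks): if confidence is low, the system asks
# the user to teach it with another example instead of guessing silently.
# label, confidence = classify(embed(new_photo), prototypes)
# if confidence < 0.6:
#     request_more_examples(label)

The point of the sketch is the interaction pattern: a handful of user-supplied examples reshapes the model, and an explicit confidence estimate determines when the system turns back to the user for more teaching rather than acting on an uncertain guess.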
We are a multi-disciplinary team that includes design, machine learning, engineering, and human-computer interaction. We work closely together to achieve innovative human-AI experiences and push the boundaries of our respective disciplines. We use a wide range of methods, from ethnography (see CHI2018 paper), to prototype development and deployment (see CHI2021 paper), to dataset development (see ICCV2021 paper) and technical ML innovation (see NeurIPS2021 paper). We put a significant emphasis on taking the final step to release technology to the communities that we work with (see the Code Jumper story). We also use the expertise gained from building novel systems to make theoretical (ASSETS2021 Best Paper Nomination) and policy contributions (e.g., supporting Microsoft's approach to responsible AI).
The lens we take to our choice of application domains is inclusion. Our vision is to challenge the practice of using disability (or other marking) categories, such as blind, as the basis for designing experiences. Instead, we want to enable the very diverse people within these categories to create experiences that match their own needs. We have done the majority of our research with the blind and low vision community, but we are interested in all aspects of inclusion.