Many organizations have published principles to guide the responsible development and deployment of AI systems, but it is largely left to practitioners to put those principles into practice. Other organizations have therefore produced AI ethics checklists, including checklists for specific concepts, such as fairness.
Checklists in other domains, such as aviation, medicine, and structural engineering, have had well-documented success in saving lives and improving professional practices. But unless checklists are grounded in practitioners’ needs, they may be misused or ignored.
The fairness checklist research project explores how checklists may be designed to support the development of fairer AI products and services. To do this, we work with the AI practitioners whom the checklists are intended to support, soliciting their input on checklist design and supporting the adoption and integration of the checklist into AI design, development, and deployment lifecycles.
Our first studies in this project have produced a fairness checklist co-designed with practitioners, as well as insights into how organizational and team processes shape the ways AI teams address fairness harms.