Transparency and Intelligibility Throughout the Machine Learning Life Cycle

People play a central role in the machine learning life cycle. Consequently, building machine learning systems that are reliable, trustworthy, and fair requires that relevant stakeholders, including developers, users, and anyone affected by these systems, have at least a basic understanding of how they work.

In this webinar, Microsoft researcher Jenn Wortman Vaughan explores how best to incorporate transparency into the machine learning life cycle. The first part examines three components of transparency, with examples: traceability, communication, and intelligibility.

The second part of this webinar dives deeper into intelligibility. Building on recent research, we explore the importance of evaluating methods for achieving intelligibility in context with relevant stakeholders, how to empirically test whether intelligibility techniques let users achieve their goals, and why we should expand our concept of intelligibility beyond machine learning models to other aspects of machine learning systems, such as datasets and performance metrics.

Together, you’ll explore:

- Traceability: documenting goals, definitions, design choices, and assumptions of machine learning systems
- Communication: being open about the ways machine learning technology is used and the limitations of this technology
- Intelligibility: giving people the ability to understand and monitor the behavior of machine learning systems to the extent necessary to achieve their goals
- The intelligible machine learning landscape: the diverse ways that needs for intelligibility can arise, along with techniques for achieving and evaluating intelligibility that have been proposed in the machine learning literature (one such technique is sketched in code after this list)
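To make the intelligibility component more concrete, below is a minimal sketch of one technique from the broader landscape: fitting a "glassbox" model whose per-feature contributions can be inspected directly. The webinar does not prescribe a specific tool; the use of Microsoft's open-source InterpretML library and a standard scikit-learn dataset here is an illustrative assumption, not the method discussed in the talk.

```python
# A sketch of one intelligibility technique: a glassbox additive model whose
# behavior can be examined feature by feature, globally and per prediction.
# Assumes the InterpretML library (pip install interpret) and scikit-learn.
from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

# Load a small tabular dataset and hold out a test split.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit an Explainable Boosting Machine: an additive model whose learned shape
# functions can be read directly, one way of supporting intelligibility.
ebm = ExplainableBoostingClassifier(random_state=0)
ebm.fit(X_train, y_train)

# Global explanation: how each feature contributes to predictions overall.
show(ebm.explain_global())

# Local explanations: why the model scored a few held-out examples as it did.
show(ebm.explain_local(X_test[:5], y_test[:5]))
```

As the second part of the webinar emphasizes, producing explanations like these is only a starting point; whether they actually help a given stakeholder achieve their goals should be tested empirically, in context.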

Date:
Speaker: Jenn Wortman Vaughan
Affiliation: Microsoft Research