The Challenge of Crafting Intelligible Intelligence
- Daniel S. Weld
- Gagan Bansal
Communications of the ACM
Because Artificial Intelligence (AI) software uses techniques like deep lookahead search and stochastic optimization of huge neural networks to fit mammoth datasets, it often produces complex behavior that is difficult for people to understand. Yet organizations are deploying AI algorithms in many mission-critical settings. To trust their behavior, we must make AI intelligible, either by using inherently interpretable models or by developing new methods for explaining and controlling otherwise overwhelmingly complex decisions using local approximation, vocabulary alignment, and interactive explanation. This paper argues that intelligibility is essential, surveys recent work on building such systems, and highlights key directions for research.