Problem
The user needs to form realistic expectations about what the system can do and how well it can do it.
Solution
Use the precision of numeric measurements intentionally to communicate that the system is probabilistic and may make mistakes.
Use when
- The user needs to understand that the system might make mistakes.
- System behaviors and/or outputs can be qualified numerically (the output itself is a number, or individual behaviors can be usefully described with numbers).
How
For system outputs and/or behaviors that are qualified numerically, match the precision of the numbers shown in the UI to the precision of system performance.
Understand the level of numerical precision common to the domain.
Work with an AI/ML practitioner to learn what level of precision the system can achieve and how well calibrated its confidence estimates are.
Communicate only the numeric measurements directly relevant to the user’s task or goal.
When deciding what level of numerical precision to communicate, consider how many decimal places to show and how granular measurements should be (e.g., seconds vs. minutes).
To communicate high system performance, use more precise and granular measurements.
To communicate that the system may make mistakes, use less precise and granular measurements, such as ranges or rounded numbers, or use hedging expressions such as “a few minutes,” “about,” or “approximately” (see pattern G2-A: Match the level of precision in UI communication with system performance – Language).
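As an illustration of these steps, here is a minimal TypeScript sketch that maps model confidence to display precision for a travel-time estimate. The `Estimate` shape, the confidence thresholds, and the rounding rules are illustrative assumptions, not part of the pattern itself.

```typescript
// Minimal sketch: choose display precision from model confidence.
// The Estimate shape, thresholds, and rounding rules are illustrative
// assumptions, not prescribed by this pattern.

interface Estimate {
  value: number;      // e.g., a predicted travel time in minutes
  confidence: number; // model confidence in [0, 1]
}

function formatTravelTime(est: Estimate): string {
  if (est.confidence >= 0.9) {
    // High confidence: a precise, granular value signals high performance.
    return `${Math.round(est.value)} min`;
  }
  if (est.confidence >= 0.6) {
    // Moderate confidence: hedge with "about" and round to 5 minutes.
    return `about ${Math.round(est.value / 5) * 5} min`;
  }
  // Low confidence: a range communicates that mistakes are possible.
  const lower = Math.floor(est.value / 10) * 10;
  return `${lower}–${lower + 10} min`;
}

formatTravelTime({ value: 12.3, confidence: 0.95 }); // "12 min"
formatTravelTime({ value: 14.2, confidence: 0.7 });  // "about 15 min"
formatTravelTime({ value: 14.2, confidence: 0.4 });  // "10–20 min"
```

The point is that precision itself carries a message: a well-calibrated, high-confidence prediction can safely render as “12 min,” while a weaker one should degrade to “about 15 min” or a “10–20 min” range rather than overstating certainty.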
User benefits
- Enables the user to identify when the system is performing within or outside specifications.
- Enables the user to assess how much to trust the system’s output or behavior.
- Enables the user to monitor system performance.
Common pitfalls
- Overly precise numbers may lead the user to form over-inflated expectations about system performance.
- Insufficiently precise numbers may lead the user to underestimate system performance.
Note: Over-inflated user expectations have been shown to cause frustration and even product abandonment.
References
Model calibration:
- Alexandru Niculescu-Mizil and Rich Caruana. 2005. Predicting good probabilities with supervised learning. In Proceedings of the 22nd international conference on Machine learning (ICML ’05). Association for Computing Machinery, New York, NY, USA, 625–632. DOI: https://doi.org/10.1145/1102351.1102430
- Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q. Weinberger. 2017. On Calibration of Modern Neural Networks. In Proceedings of the 34th International Conference on Machine Learning (ICML ’17). PMLR 70:1321–1330.
Over-inflated user expectations have been shown to cause frustration and even product abandonment:
- Jan Hartmann, Antonella De Angeli, and Alistair Sutcliffe. 2008. Framing the user experience: information biases on website quality judgement. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’08). Association for Computing Machinery, New York, NY, USA, 855–864. DOI: https://doi.org/10.1145/1357054.1357190
- Jaroslav Michalco, Jakob Grue Simonsen, and Kasper Hornbæk. 2015. An Exploration of the Relation Between Expectations and User Experience. International Journal of Human–Computer Interaction 31(9), 603–617. DOI: https://doi.org/10.1080/10447318.2015.1065696
- Daniel S. Weld and Gagan Bansal. 2018. Intelligible Artificial Intelligence. arXiv preprint arXiv:1803.04263.
- P. Robinette, W. Li, R. Allen, A. M. Howard, and A. R. Wagner. 2016. Overtrust of robots in emergency evacuation scenarios. In 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Christchurch, 101–108. DOI: https://doi.org/10.1109/HRI.2016.7451740