Problem
The user needs to form realistic expectations about what the system can do and how well it can do it.
Solution
Use intentionally uncertain language to communicate that the system is probabilistic and may make mistakes.
Use when
- The stakes are low.
- The user needs to understand the system might make mistakes.
- The user is not interested in precise metrics of system performance.
- Precise system performance metrics are not available.
How
For system outputs and/or behaviors that are best qualified with language, match the precision of the words to the precision of system performance.
Collaborate with an AI/ML practitioner to get information about the level of precision the system can achieve and its confidence.
When deciding what level of precision to communicate, consider how words convey degrees of certainty: for example, “will” or “is” versus “may,” “might,” or “probably.”
To communicate high system performance, use more certain language.
To communicate that the system may make mistakes, use less certain language.
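In code, this mapping from system confidence to certainty of wording can be made explicit. The sketch below is a minimal illustration, assuming the model exposes a calibrated confidence score in [0, 1]; the thresholds and phrasings are illustrative choices, not prescribed by this pattern.

```python
def hedge(label: str, confidence: float) -> str:
    """Pick wording whose certainty matches the system's confidence.

    Assumes `confidence` is a calibrated probability in [0, 1].
    Thresholds and phrasings are illustrative and should be tuned
    with an AI/ML practitioner and validated with users.
    """
    if confidence >= 0.9:
        # High confidence: certain language ("is")
        return f"This is {label}."
    if confidence >= 0.6:
        # Moderate confidence: hedged language ("probably")
        return f"This is probably {label}."
    # Low confidence: clearly tentative language ("might")
    return f"This might be {label}."

print(hedge("a tomato plant", 0.95))  # certain wording
print(hedge("a tomato plant", 0.40))  # tentative wording
```

Note that a design like this distinguishes between levels of uncertainty rather than using one hedge word everywhere, which addresses the pitfall of treating all instances of system uncertainty as if they were the same.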
User benefits
- Speaks the user’s language and is easy for the user to interpret.
- Enables the user to assess how much to trust the system’s output or behavior.
Common pitfalls
- Overly certain language may lead the user to form over-inflated expectations about system performance.
- Language that is too uncertain may lead the user to underestimate system performance.
- The use of uncertain language may be subtle and difficult for the user to notice.
- Uncertain language may be so ambiguous that it becomes meaningless.
- Language may treat all instances of system uncertainty as if they were the same.
Note: Over-inflated user expectations have been shown to cause frustration and even product abandonment.
References
- Jan Hartmann, Antonella De Angeli, and Alistair Sutcliffe. 2008. Framing the user experience: information biases on website quality judgement. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’08). Association for Computing Machinery, New York, NY, USA, 855–864. DOI:https://doi.org/10.1145/1357054.1357190
- Jaroslav Michalco, Jakob Grue Simonsen & Kasper Hornbæk (2015) An Exploration of the Relation Between Expectations and User Experience, International Journal of Human–Computer Interaction, 31:9, 603-617, DOI: 10.1080/10447318.2015.1065696
- Daniel S. Weld and Gagan Bansal. 2018. Intelligible Artificial Intelligence. arXiv preprint.
- P. Robinette, W. Li, R. Allen, A. M. Howard and A. R. Wagner, Overtrust of robots in emergency evacuation scenarios, 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Christchurch, 2016, pp. 101-108, doi: 10.1109/HRI.2016.7451740.