Problem
The user needs to understand what the system can do.
Solution
Show a set of system outputs for the user to choose from.
Use when
- It is more efficient for the user to learn what the AI system can do from examples.
- A full introduction to the system’s capabilities is too complex to communicate efficiently.
- The system deals with images, audio, or video, where examples are often especially useful.
How
Show a preview of the most probable system outputs, based on the current state and input.
Select which system outputs to display based on one or more of the following considerations (a selection sketch follows the list):
- Diversity – ensure the sample outputs are sufficiently diverse to demonstrate the range of system capabilities.
- Popularity – select among popular or trending outputs.
- Contextual relevance – for example, take into consideration current events or user goals.
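One possible way to combine these considerations is a simple greedy ranking over candidate outputs. The sketch below is illustrative only: the Candidate record, its feature embedding (used to judge diversity), and the popularity and relevance scores, as well as the weights and scoring scheme, are assumptions for the example rather than part of the pattern.

```python
# A minimal, illustrative sketch of example-output selection.
# All field names, scores, and weights are hypothetical.
from dataclasses import dataclass
from math import sqrt

@dataclass
class Candidate:
    output_id: str
    features: tuple[float, ...]   # embedding used to judge diversity
    popularity: float             # e.g. normalized usage/trending score, 0..1
    relevance: float              # e.g. match to current user goal or events, 0..1

def _distance(a: tuple[float, ...], b: tuple[float, ...]) -> float:
    return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def select_examples(candidates: list[Candidate], k: int = 3,
                    w_diversity: float = 1.0, w_popularity: float = 0.5,
                    w_relevance: float = 0.5) -> list[Candidate]:
    """Greedily pick k outputs, trading off diversity against
    popularity and contextual relevance."""
    chosen: list[Candidate] = []
    remaining = list(candidates)
    while remaining and len(chosen) < k:
        def score(c: Candidate) -> float:
            # Diversity = distance to the nearest already-chosen example.
            diversity = min((_distance(c.features, s.features) for s in chosen),
                            default=1.0)
            return (w_diversity * diversity
                    + w_popularity * c.popularity
                    + w_relevance * c.relevance)
        best = max(remaining, key=score)
        chosen.append(best)
        remaining.remove(best)
    return chosen

if __name__ == "__main__":
    pool = [
        Candidate("portrait", (0.9, 0.1), popularity=0.8, relevance=0.4),
        Candidate("landscape", (0.1, 0.9), popularity=0.6, relevance=0.7),
        Candidate("abstract", (0.5, 0.5), popularity=0.3, relevance=0.9),
        Candidate("portrait-2", (0.85, 0.15), popularity=0.9, relevance=0.3),
    ]
    for example in select_examples(pool, k=3):
        print(example.output_id)
```

Weighting diversity against popularity and relevance lets the preview demonstrate the breadth of system capabilities without drifting away from what the user is likely to want in the current context.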
User benefits
- Facilitates user understanding by showing rather than telling.
- Learn by doing: Enables the user to learn about system capabilities while using the system.
- May also enable the user to understand how well the AI system can do what it can do (Guideline 2) and disambiguate user intent (Guideline 10).
Common pitfalls
- Providing too few or too many examples for the user’s particular context.
Note: Over-inflated user expectations have been shown to cause frustration and even product abandonment.
References
Over-inflated user expectations have been shown to cause frustration and even product abandonment:
- Jan Hartmann, Antonella De Angeli, and Alistair Sutcliffe. 2008. Framing the user experience: information biases on website quality judgement. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’08). Association for Computing Machinery, New York, NY, USA, 855–864. DOI:https://doi.org/10.1145/1357054.1357190
- Jaroslav Michalco, Jakob Grue Simonsen, and Kasper Hornbæk. 2015. An Exploration of the Relation Between Expectations and User Experience. International Journal of Human–Computer Interaction 31, 9, 603–617. DOI:https://doi.org/10.1080/10447318.2015.1065696
- Daniel S. Weld and Gagan Bansal. 2018. Intelligible Artificial Intelligence.
- P. Robinette, W. Li, R. Allen, A. M. Howard, and A. R. Wagner. 2016. Overtrust of robots in emergency evacuation scenarios. In 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Christchurch, 101–108. DOI:https://doi.org/10.1109/HRI.2016.7451740
Notes
While other patterns also make use of examples, they differ from G1-E:
- G10-A: Disambiguate before acting also uses examples, but offers them as possible options to clarify user intent.
- G11-F: Example-based explanations makes use of examples to explain a system action after that action has taken place.