CueFlik: Interactive Concept Learning in Image Search
- James Fogarty,
- Desney Tan,
- Ashish Kapoor,
- Simon Winder
CHI '08: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
Published by ACM Press
Web image search is difficult in part because a handful of keywords is generally insufficient for characterizing the visual properties of an image. Popular engines have begun to provide tags based on simple characteristics of images (such as tags for black and white images or images that contain a face), but such approaches are limited because it is unclear what tags end-users actually want to use when examining Web image search results. This paper presents CueFlik, a Web image search application that allows end-users to quickly create their own rules for re-ranking images based on their visual characteristics. End-users can then re-rank any future Web image search results according to their rule. In an experiment presented in this paper, end-users quickly create effective rules for such concepts as “product photos”, “portraits of people”, and “clipart”. When asked to conceive of and create their own rules, participants create rules such as “sports action shot” using images from queries for “basketball” and “football”. CueFlik represents both a promising new approach to Web image search and an important study in end-user interactive machine learning.
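The abstract does not detail how a rule re-ranks results, but the core idea of ranking search results by their visual similarity to user-chosen example images can be sketched roughly. The sketch below is an assumption for illustration, not the paper's method: it uses a fixed Euclidean nearest-example score over precomputed feature vectors, and the function and variable names are hypothetical.

```python
import numpy as np

def rerank_by_examples(image_features, positive_examples, negative_examples):
    """Order images so those most like the positive examples come first.

    Each argument is a 2-D array of visual feature vectors (e.g. color
    histograms or edge statistics), one row per image. An image's score is
    its distance to the nearest positive example minus its distance to the
    nearest negative example; smaller scores rank higher.
    """
    def nearest_distance(x, examples):
        return np.min(np.linalg.norm(examples - x, axis=1))

    scores = [
        nearest_distance(x, positive_examples) - nearest_distance(x, negative_examples)
        for x in image_features
    ]
    return np.argsort(scores)  # indices of images, best match first

# Hypothetical usage: re-order search results by a "product photo"-style rule
# defined through a few positive and negative example images.
# order = rerank_by_examples(result_features, pos_feats, neg_feats)
# reranked_results = [results[i] for i in order]
```

A learned rule in CueFlik is trained interactively from the end-user's examples, so the fixed distance measure above should be read only as a stand-in for whatever scoring the system derives from those examples.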