VERSE: Voice. Exploration. Retrieval. Search.

People with visual impairments are expert users of audio interfaces, including voice-activated virtual assistants and screen readers. Through interviews and surveys of this population, we learned that virtual assistants are convenient and accessible, but lack the ability to engage deeply with content (for example, to read beyond the first sentence of a Wikipedia article) and the ability to present a quick overview of the information landscape (for example, to list other search results and search verticals). In contrast, traditional screen readers are powerful and allow for deeper engagement with content (when content is accessible), but at the cost of increased complexity and decreased walk-up-and-use convenience. Our prototype, VERSE (Voice Exploration, Retrieval, and SEarch), combines the positive aspects of virtual assistants and screen readers to better support free-form, voice-based web search. Like screen readers, VERSE provides shortcuts and accelerators for common actions. Specifically, VERSE allows users to perform gestures on a companion device such as a phone or smartwatch. These companion devices are not strictly necessary, but they help users avoid the long activation phrases that can become tedious when repeated to smart speakers.
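
To make the idea of gesture accelerators concrete, here is a minimal Python sketch of how a companion device's gestures could map onto common search actions such as moving between results or reading further into a document. The class names, gesture labels, and actions are illustrative assumptions for this sketch, not VERSE's actual implementation.

from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class SearchSession:
    # Hypothetical search state: a list of result titles and a cursor.
    results: list
    index: int = 0

    def next_result(self) -> str:
        # Move to the next search result and announce it.
        self.index = min(self.index + 1, len(self.results) - 1)
        return f"Result {self.index + 1}: {self.results[self.index]}"

    def previous_result(self) -> str:
        # Move back to the previous search result and announce it.
        self.index = max(self.index - 1, 0)
        return f"Result {self.index + 1}: {self.results[self.index]}"

    def read_more(self) -> str:
        # Continue reading the current document beyond its first sentence.
        return f"Reading more from: {self.results[self.index]}"

def build_gesture_map(session: SearchSession) -> Dict[str, Callable[[], str]]:
    # Map raw gesture names from a phone or watch to session actions.
    # A real system would receive these events over Bluetooth or a local
    # network instead of hard-coding them here.
    return {
        "swipe_right": session.next_result,
        "swipe_left": session.previous_result,
        "double_tap": session.read_more,
    }

if __name__ == "__main__":
    session = SearchSession(results=["Wikipedia: Accessibility", "W3C WAI", "Screen reader"])
    gestures = build_gesture_map(session)
    for gesture in ["swipe_right", "double_tap", "swipe_left"]:
        print(gestures[gesture]())

Because each gesture maps directly to an action, a user can skim results or dig into a document with a single tap or swipe, instead of repeating an activation phrase before every command.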

Date:
Speaker:
Adam Fourney
Affiliation:
Microsoft Research

Series: Microsoft Research Faculty Summit