{"id":168354,"date":"2013-08-01T00:00:00","date_gmt":"2013-08-01T00:00:00","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/msr-research-item\/toward-compound-navigation-tasks-on-mobiles-via-spatial-manipulation\/"},"modified":"2020-04-06T03:41:28","modified_gmt":"2020-04-06T10:41:28","slug":"toward-compound-navigation-tasks-on-mobiles-via-spatial-manipulation","status":"publish","type":"msr-research-item","link":"https:\/\/www.microsoft.com\/en-us\/research\/publication\/toward-compound-navigation-tasks-on-mobiles-via-spatial-manipulation\/","title":{"rendered":"Toward compound navigation tasks on mobiles via spatial manipulation"},"content":{"rendered":"
We contrast the Chameleon Lens, which uses 3D movement of a mobile device held in the nonpreferred hand to support panning and zooming, with the Pinch-Flick-Drag metaphor of directly manipulating the view via multi-touch gestures. Lens-like approaches have significant potential because they can support navigation-selection, navigation-annotation, and other such compound tasks by off-loading navigation to the nonpreferred hand while the preferred hand annotates, marks a location, or draws a path on the screen. Our experimental results show that the Chameleon Lens is significantly slower than Pinch-Flick-Drag for the navigation subtask in isolation. But our studies also reveal that the lens is significantly faster for navigation between a few known targets, that the differences between the two techniques rapidly diminish as users gain experience, and that in the context of a compound navigation-annotation task the lens performs as well as Pinch-Flick-Drag despite its deficit on the navigation subtask itself.
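The core idea is that the lens couples viewport navigation to physical device motion: lateral movement pans the view, and movement along the depth axis zooms, so the nonpreferred hand can navigate continuously while the preferred hand touches the screen. The paper does not include an implementation, but a minimal sketch of such a motion-to-viewport mapping, under our own assumptions, might look like the following. The Viewport type, the gain values, and the exponential zoom law are illustrative choices, not the authors' design.

```python
# Hypothetical sketch (not the paper's implementation) of a lens-like
# mapping from 3D device displacement to pan and zoom of a 2D viewport.
# All names, parameters, and gains below are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Viewport:
    cx: float    # world x-coordinate at the center of the view
    cy: float    # world y-coordinate at the center of the view
    zoom: float  # scale factor: world units -> screen units


def apply_device_motion(view: Viewport,
                        dx: float, dy: float, dz: float,
                        pan_gain: float = 1.0,
                        zoom_gain: float = 0.5) -> Viewport:
    """Update the viewport from one frame of device displacement (meters).

    Lateral motion (dx, dy) pans the view; motion along the device's
    depth axis (dz, toward or away from the user) zooms, so pan and
    zoom can be performed in one continuous movement of the hand.
    """
    # Exponential zoom: equal hand displacements multiply the scale by
    # equal factors, which feels uniform across zoom levels.
    new_zoom = view.zoom * (2.0 ** (dz * zoom_gain))
    # Pan in world units, divided by the current zoom, so the same hand
    # movement covers more ground when zoomed out and permits finer
    # positioning when zoomed in.
    new_cx = view.cx + (dx * pan_gain) / new_zoom
    new_cy = view.cy + (dy * pan_gain) / new_zoom
    return Viewport(new_cx, new_cy, new_zoom)


if __name__ == "__main__":
    view = Viewport(cx=0.0, cy=0.0, zoom=1.0)
    # Simulated per-frame displacements: drift right while pulling closer.
    for dx, dy, dz in [(0.01, 0.0, 0.0), (0.01, 0.0, 0.2), (0.0, 0.0, 0.4)]:
        view = apply_device_motion(view, dx, dy, dz)
        print(f"center=({view.cx:.3f}, {view.cy:.3f}) zoom={view.zoom:.2f}")
```

Because the mapping is a pure per-frame update, it can run alongside touch handling for the preferred hand, which is what makes compound navigation-annotation tasks possible with this style of technique.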