WritLarge is a prototype system from Microsoft Research for the 84″ Microsoft Surface Hub, a large electronic whiteboard supporting both pen and multi-touch input. WritLarge allows creators to unleash the latent expressive power of ink in a compelling manner. Using multi-touch, the content creator can simply frame a portion of their 'whiteboard' session between thumb and forefinger, and then act on that selection (by copying, sharing, organizing, or otherwise transforming the content) using the pen wielded by the opposite hand.
The pen and touch inputs thereby complement one another to afford a completely new—and completely natural—way of using freeform content to "ink at the speed of thought" on Microsoft's line of Surface devices.
WritLarge enables creators to easily indicate content with one hand while acting on it with the other. This makes it easy, for example, to select specific ink strokes to recognize, or otherwise transform and restructure, in a rich variety of ways.
Electronic whiteboards remain surprisingly difficult to use in the context of creativity support and design. A key problem is that once a designer places strokes and reference images on a canvas, actually doing anything useful with a subset of that content involves numerous steps. Hence scope—that is, the selection of content—is a central concern, yet current techniques often require switching modes and encircling ink with a lengthy lasso, if not round-trips to the edge of the display. Only then can the user take action, such as to copy, refine, or re-interpret content.

Such is the stilted nature of selection and action in the digital world. But it need not be so. By contrast, consider an everyday manual task such as sandpapering a piece of woodwork to hew off its rough edges. Here, we use our hands to grasp and bring to the fore—that is, select—the portion of the work-object (the wood) that we want to refine. And because we are working with a tool (the sandpaper), the hand employed for this 'selection' sub-task is typically the non-preferred one, which skillfully manipulates the frame of reference for the subsequent 'action' of sanding, a complementary sub-task articulated by the preferred hand.

Therefore, in contrast to the disjoint subtasks foisted on us by most interactions with computers, the above example shows how complementary manual activities lend a sense of flow that "chunks" selection and action into a continuous selection-action phrase. By manipulating the workspace, the off-hand shifts the context of the actions to be applied, while the preferred hand brings different tools to bear, such as sandpaper, file, or chisel, as necessary.

The main goal of WritLarge, then, is to demonstrate similar continuity of action for electronic whiteboards. This motivated free-flowing, close-at-hand techniques that unify selection and action via bimanual pen+touch interaction. To address selection, we designed a lightweight, integrated, and fast way for users to indicate scope, called the Zoom-Catcher (shown above), as follows:

With the thumb and forefinger of the non-preferred hand, the user simply frames a portion of the canvas.

This sounds straightforward, and it is—from the user's perspective. But this simple reframing of pinch-to-zoom affords a transparent, toolglass-like palette (the Zoom-Catcher, manipulated by the non-preferred hand) that floats above the canvas and the ink strokes and reference images thereupon. The Zoom-Catcher elegantly integrates numerous steps: it dovetails with pinch-to-zoom, affording multi-scale interaction; it serves as a mode switch, an input filter, and an illumination of a portion of the canvas, thereby doubling as a lightweight specification of scope; and once latched in, it sets the stage for action by evoking commands at hand, revealing context-appropriate functions in a location-independent manner, so the user can then act on them with the stylus (or a finger).
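To make the dovetailing of selection with pinch-to-zoom concrete, the sketch below shows one way a two-finger gesture could double as both zoom and selection frame, latching in after a brief dwell. This is a minimal TypeScript sketch under assumed thresholds (LATCH_DWELL_MS, MOVE_TOLERANCE) and an assumed gesture-handler shape; it is not the WritLarge implementation.

```typescript
// Minimal sketch (assumption-laden): how a two-finger pinch might double as
// both zoom gesture and Zoom-Catcher-style selection frame. The thresholds,
// names, and dwell heuristic are illustrative, not the actual WritLarge code.

interface Point { x: number; y: number; }
interface Rect { left: number; top: number; right: number; bottom: number; }

type GestureResult =
  | { kind: "zoom"; scale: number; center: Point }   // ordinary pinch-to-zoom
  | { kind: "latched"; scope: Rect };                // framed region becomes the selection scope

const LATCH_DWELL_MS = 350;   // hypothetical dwell time before the frame "latches in"
const MOVE_TOLERANCE = 8;     // pixels of drift allowed while dwelling

class ZoomCatcherSketch {
  private startDist = 1;
  private lastP1: Point | null = null;
  private lastP2: Point | null = null;
  private stillSince = 0;
  private latched = false;

  // Call when both touch points first come down.
  begin(time: number, p1: Point, p2: Point): void {
    this.startDist = Math.max(1e-6, distance(p1, p2));
    this.lastP1 = p1;
    this.lastP2 = p2;
    this.stillSince = time;
    this.latched = false;
  }

  // Call on every move of either touch point.
  update(time: number, p1: Point, p2: Point): GestureResult {
    if (!this.lastP1 || !this.lastP2) this.begin(time, p1, p2);

    // Any drift beyond the tolerance resets the dwell timer.
    const moved =
      distance(p1, this.lastP1!) > MOVE_TOLERANCE ||
      distance(p2, this.lastP2!) > MOVE_TOLERANCE;
    if (moved) {
      this.stillSince = time;
      this.lastP1 = p1;
      this.lastP2 = p2;
    }

    // Dwelling with both fingers down latches the frame as a selection scope.
    if (this.latched || time - this.stillSince >= LATCH_DWELL_MS) {
      this.latched = true;
      return { kind: "latched", scope: frame(p1, p2) };
    }

    // Otherwise the same gesture behaves as ordinary pinch-to-zoom.
    return {
      kind: "zoom",
      scale: distance(p1, p2) / this.startDist,
      center: { x: (p1.x + p2.x) / 2, y: (p1.y + p2.y) / 2 },
    };
  }
}

function distance(a: Point, b: Point): number {
  return Math.hypot(a.x - b.x, a.y - b.y);
}

function frame(p1: Point, p2: Point): Rect {
  return {
    left: Math.min(p1.x, p2.x),
    top: Math.min(p1.y, p2.y),
    right: Math.max(p1.x, p2.x),
    bottom: Math.max(p1.y, p2.y),
  };
}
```

In this sketch, the same two fingers that zoom the canvas become a selection frame simply by pausing, which mirrors how the Zoom-Catcher avoids an explicit mode switch.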
Recognizing selected content in WritLarge. The content creator can easily select, and act on, only the specific ink strokes of interest. The recognized results then preserve the position and baseline orientation that are naturally expressed in the ink.
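To illustrate how recognized text could inherit the geometry of the ink it replaces, here is a rough TypeScript sketch that derives a position and baseline angle from the stroke points via a least-squares fit. The recognizer itself is left out, and every name here (Stroke, Placement, placementFromInk) is hypothetical rather than taken from WritLarge.

```typescript
// Sketch only: derive a placement (position + baseline angle) for recognized
// text from the ink strokes it replaces. The handwriting recognizer is left
// abstract; the fitting approach is an illustrative assumption.

interface Point { x: number; y: number; }
type Stroke = Point[];

interface Placement {
  origin: Point;          // top-left of the ink's bounding box
  baselineAngle: number;  // radians, from a least-squares fit of all ink points
}

function placementFromInk(strokes: Stroke[]): Placement {
  const pts = strokes.flat();
  if (pts.length === 0) throw new Error("no ink to place");

  const n = pts.length;
  const meanX = pts.reduce((s, p) => s + p.x, 0) / n;
  const meanY = pts.reduce((s, p) => s + p.y, 0) / n;

  // Least-squares slope of y on x approximates the handwriting baseline tilt.
  let num = 0, den = 0;
  for (const p of pts) {
    num += (p.x - meanX) * (p.y - meanY);
    den += (p.x - meanX) ** 2;
  }
  const slope = den === 0 ? 0 : num / den;

  return {
    origin: {
      x: Math.min(...pts.map(p => p.x)),
      y: Math.min(...pts.map(p => p.y)),
    },
    baselineAngle: Math.atan(slope),
  };
}
```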
This example shows how content creators can easily select items and organize them into a grid layout.
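As a rough illustration of the grid-layout step, the following sketch packs the bounding boxes of selected items into rows and columns; the column count, padding, and Box shape are arbitrary assumptions for the example, not values from WritLarge.

```typescript
// Sketch: arrange the bounding boxes of selected items into a simple grid,
// anchored at the group's original top-left corner.

interface Box { x: number; y: number; width: number; height: number; }

function layoutAsGrid(items: Box[], columns = 3, padding = 16): Box[] {
  const originX = Math.min(...items.map(b => b.x));
  const originY = Math.min(...items.map(b => b.y));
  const cellW = Math.max(...items.map(b => b.width)) + padding;
  const cellH = Math.max(...items.map(b => b.height)) + padding;
  return items.map((b, i) => ({
    ...b,
    x: originX + (i % columns) * cellW,
    y: originY + Math.floor(i / columns) * cellH,
  }));
}
```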
Likewise, content creators can rewind time for a selected portion of the canvas, allowing earlier states of a sketch to be retrieved, for example.
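One plausible way to realize region-scoped rewind is to keep timestamped strokes and filter them by both the selection region and a time cutoff, as in the sketch below; the data model (TimedStroke, Region) is an assumption for illustration, not the actual WritLarge representation.

```typescript
// Sketch: rewind a selected region of the canvas by filtering timestamped
// strokes. Strokes inside the selection that were added after the cutoff are
// hidden; everything outside the selection is left untouched.

interface Point { x: number; y: number; }
interface TimedStroke { points: Point[]; createdAt: number; }
interface Region { left: number; top: number; right: number; bottom: number; }

function rewindRegion(
  strokes: TimedStroke[],
  scope: Region,
  cutoff: number  // timestamp to rewind the selected region to
): TimedStroke[] {
  const inside = (s: TimedStroke) =>
    s.points.every(p =>
      p.x >= scope.left && p.x <= scope.right &&
      p.y >= scope.top && p.y <= scope.bottom);

  return strokes.filter(s => !inside(s) || s.createdAt <= cutoff);
}
```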
Building from this key insight, our work contributes unified selection and action by bringing together the following:
- Lightweight specification of scope via the Zoom-Catcher.
- In a way that continuously dovetails with pinch-to-zoom.
- Thus affording unified, multi-scale selection and action with pen+touch and both hands in complementary roles.
- These primitives support flexible, interpretation-rich, and easily reversible representations of content, with a clear mental model of levels spatially organized along semantic, structural, and temporal axes of movement (see the sketch after this list).
- Our approach thereby unleashes many natural attributes of ink, such as the position, size, orientation, textual content, and implicit structure of handwriting.
- And in a way that leaves the user in complete control of what gets recognized, as well as when recognition occurs, so as not to break the flow of creative work.
- A preliminary evaluation of the system with users suggests that combining zooming and selection in this manner works extremely well and is self-revealing for most users.
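To make the mental model of levels along three axes slightly more concrete, here is a speculative TypeScript sketch in which a selection holds an independent position on each axis and can be shifted one axis at a time; the level indices and names are illustrative placeholders, not the taxonomy used in WritLarge.

```typescript
// Speculative sketch of the "levels along three axes" mental model. The
// concrete representation below is an illustrative placeholder.

type Axis = "semantic" | "structural" | "temporal";

interface SelectionState {
  scope: { left: number; top: number; right: number; bottom: number };
  current: Record<Axis, number>;  // index of the active level on each axis
}

// Shifting a selection along one axis leaves the other two untouched,
// which keeps every transformation easy to reverse.
function shiftLevel(state: SelectionState, axis: Axis, delta: number): SelectionState {
  const current = { ...state.current };
  current[axis] = Math.max(0, current[axis] + delta);
  return { ...state, current };
}
```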
Collectively, these contributions aim to reduce the impedance mismatch between human and technology, thus enhancing the interactional fluency between a creator's ink strokes and the resulting representations at their command.
Key collaborators on this project include Haijun Xia (University of Toronto) and Xiao Tu (Microsoft).