CHI Part 1: Interacting through touch, virtually and physically


A preview of the ACM CHI Conference on Human Factors in Computing Systems

By Justin Cranshaw, Researcher, Microsoft Research


Human-computer interaction (HCI) research at Microsoft provides a human-centered perspective on the future of computing, reshaping the roles computers play in our lives and propelling society forward through the tools, systems, and methods we investigate and invent. Microsoft Research’s HCI team has expertise in a wide range of domain specialties and disciplines, providing guidance to much of the work we do at Microsoft, in both product and research.


One of the premier international venues for sharing academic work in HCI is the ACM CHI Conference on Human Factors in Computing Systems (or CHI, as it’s more commonly called). This year’s conference will be held May 6–11 in Denver, where 73 of us at Microsoft contributed to 37 papers, 8 of which received awards. In this post and the next, I’ll share some of the major themes I see in the work my Microsoft colleagues are presenting at CHI this year.

For decades, “touch” has been a major focus of HCI research aimed at building more natural methods of computational input. This year, CHI features some of the latest Microsoft advances in touch-related research, including work that combines multi-touch sensing with digital pen input in ways that mirror how people naturally use their hands. For example, in the physical world, when people hold (“touch”) a document, they will often frame a particular region of interest between the thumb and forefinger of their non-preferred hand, while they mark up the page with a pen in their preferred hand.

One paper, As We May Ink? Learning from Everyday Analog Pen Use to Improve Digital Ink Experiences, by Yann Riche, Nathalie Henry Riche, Ken Hinckley, Sarah Fuelling, Sarah Williams, and Sheri Panabaker, analyzes how people use freeform ink—on both “analog” paper and “digital” computer screens—in their everyday lives. The analysis revealed a key insight: ink is laden with latent expressive attributes (such as the placement, size, and orientation of notations) that far exceed any textual transcription of the handwriting alone. In this and many other ways, ink uniquely affords the capture of fleeting thoughts, weaving together symbolic and figurative content, in a way that one could never achieve with a keyboard.

A second paper, WritLarge: Ink Unleashed by Unified Scope, Action, & Zoom, by Haijun Xia, Ken Hinckley, Michel Pahud, Xiao Tu, and William A.S. Buxton, which earned an Honorable Mention award, introduces a system that puts these insights into action—on the mammoth 84-inch Microsoft Surface Hub, no less—to unleash the latent expressive power of ink. Using multi-touch, the user can simply frame a portion of their “whiteboard” session between thumb and forefinger, and then act on that selection (by copying, sharing, organizing, or otherwise transforming the content) using the pen wielded by the opposite hand.
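To make the framing idea concrete, here is a minimal hypothetical sketch in TypeScript using the standard Pointer Events API (it is not the WritLarge implementation): two touch contacts from the non-preferred hand define a framed region, and pen input from the preferred hand then acts on whatever is framed. The board element and actOnSelection function are stand-ins for application-specific behavior.

```typescript
// Hypothetical sketch, not the WritLarge implementation.
type Point = { x: number; y: number };
type Rect = { left: number; top: number; right: number; bottom: number };

const board = document.getElementById("board") as HTMLElement; // assumed drawing surface
const touches = new Map<number, Point>();                      // active touch contacts by pointerId
let framedRegion: Rect | null = null;

// Two touch contacts (e.g., thumb and forefinger) span the framed selection.
function updateFrame(): void {
  const pts = [...touches.values()];
  if (pts.length < 2) { framedRegion = null; return; }
  const [a, b] = pts;
  framedRegion = {
    left: Math.min(a.x, b.x), top: Math.min(a.y, b.y),
    right: Math.max(a.x, b.x), bottom: Math.max(a.y, b.y),
  };
}

// Placeholder for copying, sharing, organizing, or transforming the framed content.
function actOnSelection(region: Rect, pen: PointerEvent): void { /* ... */ }

board.addEventListener("pointerdown", (e: PointerEvent) => {
  if (e.pointerType === "touch") {
    touches.set(e.pointerId, { x: e.clientX, y: e.clientY });
    updateFrame();
  } else if (e.pointerType === "pen" && framedRegion) {
    actOnSelection(framedRegion, e); // pen in the preferred hand acts on the frame
  }
});

board.addEventListener("pointermove", (e: PointerEvent) => {
  if (e.pointerType === "touch" && touches.has(e.pointerId)) {
    touches.set(e.pointerId, { x: e.clientX, y: e.clientY });
    updateFrame();
  }
});

board.addEventListener("pointerup", (e: PointerEvent) => {
  touches.delete(e.pointerId);
  updateFrame();
});
```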

A third paper, Thumb + Pen Interaction on Tablets, by Ken Pfeuffer, Ken Hinckley, Michel Pahud, and William A.S. Buxton, addresses this notion of “pen + touch” interaction in tablet scenarios, such as when using a Surface tablet on the couch, where the non-preferred hand must often hold the device itself. In this case, the thumb is still available and sufficiently mobile to manipulate many controls, enabling a whole new space of “thumb + pen” interactions, such as allowing the user to readily shift the pen between annotation and cell-selection functions on an Excel spreadsheet.
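As a rough illustration of the idea (again, a hypothetical sketch rather than the paper's implementation), a thumb-operated toggle placed along the screen edge could switch what pen strokes mean, for example between annotating and selecting cells. The element IDs and the beginInkStroke and beginCellSelection functions below are placeholder names.

```typescript
// Hypothetical sketch: the thumb of the hand gripping the tablet sets the pen's mode.
type PenMode = "annotate" | "selectCells";
let penMode: PenMode = "annotate";

const modeToggle = document.getElementById("thumb-toggle") as HTMLElement; // edge-of-screen control
const sheet = document.getElementById("sheet") as HTMLElement;             // spreadsheet surface

// The gripping hand's thumb taps the toggle to flip the pen's role.
modeToggle.addEventListener("pointerdown", (e: PointerEvent) => {
  if (e.pointerType === "touch") {
    penMode = penMode === "annotate" ? "selectCells" : "annotate";
  }
});

// Pen strokes from the preferred hand are interpreted according to the thumb-set mode.
sheet.addEventListener("pointerdown", (e: PointerEvent) => {
  if (e.pointerType !== "pen") return;
  if (penMode === "annotate") {
    beginInkStroke(e.clientX, e.clientY);     // freeform markup
  } else {
    beginCellSelection(e.clientX, e.clientY); // range selection
  }
});

// Stubs standing in for application-specific behavior.
function beginInkStroke(x: number, y: number): void { /* ... */ }
function beginCellSelection(x: number, y: number): void { /* ... */ }
```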

HCI research on “touch” is not just about enabling new ways to interact with the virtual world through physical devices. Simulating the physicality of touch is also an important dimension for driving natural interactions in virtual worlds. Since the beginning of virtual reality (VR) development, the goal has been to immerse users in a virtual world that is as real to them as the physical one. While the capabilities of computer graphics, virtual reality headset displays, and 3D audio rendering have progressed significantly over the last decade, our ability to simulate physicality in the virtual world is still in its infancy.

To help crack this problem, a team of researchers at Microsoft Research—Eyal Ofek, Christian Holz, Hrvoje Benko, and Andy Wilson, along with Lung-Pan Chen, a visiting intern from HPI, Germany—has proposed a solution that lets us feel our environment while in VR. The authors built a system that tricks your senses by redirecting your hands, so that whenever you see yourself touching a virtual object, your hands are actually touching physical geometry. Such a technology has the potential to make virtual worlds come alive in a tactile way.
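The general redirection idea can be sketched in a few lines. The following is an illustrative approximation, not the authors' code: the rendered (virtual) hand is offset by a growing fraction of the gap between a physical proxy object and the virtual target, so that when the user sees their hand touch the virtual object, their real hand has landed on the proxy.

```typescript
// Illustrative sketch of hand redirection; all names and the blending scheme are assumptions.
type Vec3 = { x: number; y: number; z: number };

const sub = (a: Vec3, b: Vec3): Vec3 => ({ x: a.x - b.x, y: a.y - b.y, z: a.z - b.z });
const add = (a: Vec3, b: Vec3): Vec3 => ({ x: a.x + b.x, y: a.y + b.y, z: a.z + b.z });
const scale = (a: Vec3, s: number): Vec3 => ({ x: a.x * s, y: a.y * s, z: a.z * s });
const length = (a: Vec3): number => Math.hypot(a.x, a.y, a.z);

/**
 * Compute where to render the virtual hand.
 * physicalHand:  tracked position of the real hand
 * reachStart:    hand position when the reach began
 * physicalProxy: real object that will supply the touch sensation
 * virtualTarget: object the user sees and reaches for
 */
function redirectedHand(
  physicalHand: Vec3,
  reachStart: Vec3,
  physicalProxy: Vec3,
  virtualTarget: Vec3
): Vec3 {
  // Progress of the reach: 0 at the start, 1 when the real hand arrives at the proxy.
  const total = length(sub(physicalProxy, reachStart));
  const remaining = length(sub(physicalProxy, physicalHand));
  const progress = total > 0 ? Math.min(1, Math.max(0, 1 - remaining / total)) : 1;

  // Blend in the proxy-to-target offset gradually so the warp stays subtle.
  const offset = scale(sub(virtualTarget, physicalProxy), progress);
  return add(physicalHand, offset);
}
```

The reason the offset is blended in gradually, rather than applied all at once, is to keep the mismatch between what the user sees and what their body feels small enough at any instant to go unnoticed.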

Since Douglas Engelbart introduced the computer mouse in 1968, touch has been one of the major areas where HCI has advanced computation. Now, nearly 50 years later, Microsoft initiatives in HCI are helping reinvent our relationships with computation through new touch-based interactions and experiences.
