Objects that can be touched and manipulated with the Actuated 3-D Display.
One project application consisted of three virtual 3-D boxes, each with different virtual weights and friction forces corresponding to its supposed material: stone, wood, or sponge. Users could push a finger on the screen into the virtual space until they encountered one of the boxes, and the device simulated the appropriate resistance through force feedback as the user pushed on each box. The force-feedback monitor responds to convey the sensation of the different materials: the stone block “feels” hard to the touch and requires more force to push, while the sponge block is soft and easy to push.
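As a rough sketch of how such material-dependent resistance might be computed, consider a simple spring-plus-damping model. The material names come from the project, but the function and the stiffness and friction values below are hypothetical illustrations, not the team's code:

```python
# Hypothetical per-material force feedback: the commanded resistance grows
# with how far the finger has pushed into the virtual box.
MATERIALS = {
    "stone":  {"stiffness": 40.0, "friction": 8.0},   # hard, high resistance
    "wood":   {"stiffness": 20.0, "friction": 4.0},
    "sponge": {"stiffness": 4.0,  "friction": 0.5},   # soft, easy to push
}

def feedback_force(material: str, penetration_mm: float,
                   finger_speed_mm_s: float) -> float:
    """Force (newtons) pushed back against the finger: a spring term for
    stiffness plus a damping term standing in for friction."""
    if penetration_mm <= 0.0:          # finger has not reached the box surface
        return 0.0
    m = MATERIALS[material]
    spring = m["stiffness"] * penetration_mm / 1000.0   # N per metre pushed in
    damping = m["friction"] * finger_speed_mm_s / 1000.0
    return spring + damping
```

With equal penetration, the stone block commands roughly an order of magnitude more force than the sponge, which is all the actuator needs to make the materials feel distinct.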
“I had been interested in the notion of putting a robot behind something you could touch,” Sinclair says. “Originally, I had wanted a robot arm with many degrees of freedom. But complexity, costs, and safety issues narrowed down the options to one dimension of movement. At that point, I was sure that others must have already looked into this scenario, but after looking at the literature, it turned out no one had done this.”
It also turned out that being limited to a robot armature with one dimension of movement—the Z-axis of the applications—provided valuable insights into how much or how little data humans need to detect the shape and type of object being touched.
“Contour detection was a major component of the project,” Pahud says. “The question was, by using your normal haptic sense, could you actually identify the type of object you were touching, even though it’s being presented by a fairly crude device consisting of a 2-D touch screen and a robot arm that moves only in one dimension, forward and back along a track?”
To determine whether the device could simulate contours convincingly, an application presents the user with two rigid shapes of different depths: a cup and a ball. By changing the depth of the screen according to the user’s touch input, the team was able to simulate the surface contour of the 3-D object. In contrast to the force-feedback behavior, this mode of operation can be thought of as setting the screen position with infinite resistance, so that the user “feels” the contour of the 3-D object by tracing a finger along its surface.
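In this mode, the robot arm's Z position becomes a function of the finger's (x, y) location on the touchscreen, sampled from the virtual object's surface. A minimal sketch for the ball, assuming a hypothetical coordinate convention in which larger Z is farther from the user:

```python
import math

def screen_depth_for_ball(x: float, y: float, cx: float, cy: float,
                          radius: float, base_depth: float) -> float:
    """Hypothetical contour lookup: given the finger's (x, y) on the 2-D
    touchscreen, return the Z position the robot arm should hold so the
    screen traces the surface of a rigid ball centered at (cx, cy).
    Outside the ball's silhouette, the screen stays at base_depth."""
    dx, dy = x - cx, y - cy
    r2 = radius * radius - (dx * dx + dy * dy)
    if r2 <= 0.0:
        return base_depth                  # flat background plane
    return base_depth - math.sqrt(r2)      # ball bulges toward the user
```

As the finger slides across the silhouette, the screen rides in and out along the track, and that one-dimensional motion is what the fingertip reads as a curved surface.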
The display moves in depth based on the user’s finger position against a 3-D object.
“Your finger is always aware of motion,” Pahud explains. “Your finger pushes on the touchscreen, and the sense of touch merges with stereo vision. If we do the convergence correctly and update the visuals constantly so that they correspond to your finger’s depth perception, this is enough for your brain to accept the virtual world as real.”
Taking this experiment a step further, the team “blindfolded” subjects by making the screen blank. The goal was to see whether users could identify shapes by touch alone.
“They couldn’t see anything, and the shapes were simple objects,” Sinclair says. “We knew beforehand that complicated objects wouldn’t work, but some of the objects were reasonably sophisticated: a pyramid, a wedge, a cylinder. I’d say these results were the biggest and most pleasant surprise of the project.”
Pahud agrees.
“I was impressed with how many people got the shapes,” he says. “There were even some subjects who were 100 percent correct. That was definitely a surprise.”
The project proved that even with just a low-bandwidth data channel—the finger—it is possible to model surface contours. The bandwidth is low simply because a finger touches only one point at a time, yet with sufficient haptic feedback as the finger moves, there is enough information to identify shapes.
Idle Forces at Work
One feature of the Actuated 3-D Display with Haptic Feedback project that users might not notice, but that was essential to its function, was the implementation of a constant idle force.
“It’s a very lightweight force that pushes back to follow the finger and maintain constant contact,” Sinclair explains. “At first, it feels as though you are touching a hard wall that’s easy to push, but you get used to it very quickly because it supplies only a few ounces of force against the finger. Since touchscreen interactions require the user’s finger to remain in contact with the surface, the main challenge of the idle mode is to ensure that the screen remains in contact with the fingertip regardless of the direction the fingertip is moving, either away from or toward the user.”
Thanks to this small idle force, the screen can follow the user’s finger in depth excursions, both positive and negative, until a haptic force beyond the idle force is commanded, such as when touching and interacting with an object.
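The interplay between the idle force and any stronger commanded force can be sketched as a simple arbitration rule. The function name and the 0.8 N value (standing in for the few ounces Sinclair describes) are illustrative assumptions, not the project's implementation:

```python
IDLE_FORCE_N = 0.8  # roughly a few ounces of force; illustrative value only

def commanded_force(finger_in_contact: bool, haptic_force_n: float) -> float:
    """Hypothetical arbitration: while the finger touches the screen, the
    actuator always pushes gently toward it so contact is never lost; any
    stronger force commanded by the application (touching an object,
    crossing a detent) overrides the idle force."""
    if not finger_in_contact:
        return 0.0                          # nothing to track
    return max(IDLE_FORCE_N, haptic_force_n)
```

Because the idle force is the floor rather than an additive offset in this sketch, the screen tracks a retreating finger on its own, and application forces take over seamlessly the moment they exceed it.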
The team also implemented four additional command modes: force, velocity, position, and detent, in which additional forces are added to the idle force depending on the position of the finger and the requirements of the application. Detent mode, for example, sends a fixed-position command to the controller, which makes the screen remain exactly at a desired position, effectively canceling the idle force. This adds a brief resistance that creates a haptic signal for the user, analogous to the sensory click of a radio dial when it is turned. The detent mode proved extremely useful for exploring volumetric data.

Exploring Data Through Touch
From the first discussion about this work, Pahud envisioned a brain—or, rather, a 3-D image of a brain, built from volumetric data.
“I could see an image of the front of a brain,” he says, “and pushing a finger through the layers of the brain to travel through the data. I could imagine receiving haptic feedback when you encountered an anomaly, such as a tumor, because we can change the haptic response based on what you touch. You could have different responses for when you touch soft tissue versus hard tissue, which makes for a very rich experience.”
In contrast to the other project applications, which apply to 3-D scenes, the volumetric-data-exploration application shows how movement and haptics can enhance interactions with 2-D data. Pahud implemented a volumetric medical-image browser that displays MRI-scanned data of a human brain. By gently pushing on the screen, the user can explore the data through touch and view different image slices of the brain.
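One plausible way to realize this browsing behavior is to map the screen's physical depth linearly onto the slice indices of the MRI volume, so that pushing deeper pages through the brain. The function, names, and travel range below are a hypothetical sketch, not the actual browser code:

```python
def slice_for_depth(screen_z_mm: float, z_min_mm: float, z_max_mm: float,
                    num_slices: int) -> int:
    """Hypothetical mapping from the screen's physical depth along the track
    to an MRI slice index: the full depth excursion spans the whole volume,
    one slice per increment of travel."""
    t = (screen_z_mm - z_min_mm) / (z_max_mm - z_min_mm)
    t = min(max(t, 0.0), 1.0)                   # clamp to the travel range
    return min(int(t * num_slices), num_slices - 1)
```

With, say, a 100 mm travel range and 200 slices, each half millimetre of push advances the display by one slice, which keeps the gentle pushes Pahud describes meaningful.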
The brain browser is not just for exploring data through touch; the prototype is meant to demonstrate how touch interactions can be deployed for meaningful work. For example, if the user is interested in a particular slice and wants to return to that spot later, touching an on-screen button along the left or right side of the screen with a non-pointing finger locks the screen position in place. With the screen locked, the user can use a fingertip to annotate the slice by drawing on the image or adding notes to the side.
The interface provides a method to mark and save depth investigations of interest.
To facilitate search and retrieval of annotated slices, the project adds a haptic detent to mark each annotated slice. Thus, when the user’s finger moves past an annotated slice, either coming or going, the detent provides tactile resistance to alert the user. The user may continue on and push past the detent, but the detent has done its job by making annotations easier to find through touch, unaided by visuals.
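A detent like this can be modeled as a localized bump of resistance around each annotated depth, felt from either direction. The triangular profile and the force and width constants below are illustrative assumptions, not the project's numbers:

```python
DETENT_HALF_WIDTH_MM = 0.6   # illustrative half-width of the resistance bump
DETENT_FORCE_N = 2.5         # illustrative peak force of the bump

def detent_force(screen_z_mm: float, annotated_depths_mm: list) -> float:
    """Hypothetical detent profile: as the screen depth approaches an
    annotated slice from either side, add a brief triangular bump of extra
    resistance so the user can feel the annotation without looking."""
    force = 0.0
    for depth in annotated_depths_mm:
        d = abs(screen_z_mm - depth)
        if d < DETENT_HALF_WIDTH_MM:
            force = max(force, DETENT_FORCE_N * (1.0 - d / DETENT_HALF_WIDTH_MM))
    return force
```

Because the bump is symmetric, the resistance announces the annotated slice whether the finger is pushing in or pulling back, and pushing through it simply requires briefly exceeding the peak force.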
The kinesthetic approach clearly creates a productive new paradigm that augments touchscreen interactions, and Pahud and Sinclair see many opportunities for this type of haptic device.
“There’s always 3-D gaming,” Sinclair says, “but also 3-D modeling, education, and medical applications. We anticipate improving the experience with crisper, more detailed feedback, such as texture.”