{"id":612135,"date":"2019-10-03T11:26:39","date_gmt":"2019-10-03T18:26:39","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-project&p=612135"},"modified":"2019-10-03T11:26:39","modified_gmt":"2019-10-03T18:26:39","slug":"holotable","status":"publish","type":"msr-project","link":"https:\/\/www.microsoft.com\/en-us\/research\/project\/holotable\/","title":{"rendered":"HoloTable"},"content":{"rendered":"
<\/p>\n
HoloTable explored view-dependent rendering to simulate a 3D experience, presenting data as either a “holographic” (above-screen) or a “volumetric” (below-screen) display.
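The core of view-dependent rendering is an off-axis (asymmetric-frustum) projection recomputed each frame from the tracked head position, so rendered geometry stays registered to the physical display. The source does not include HoloTable's implementation; the following is a minimal numpy sketch of the standard technique, assuming OpenGL conventions, a screen centered at the origin in the z = 0 plane, and an eye position in meters (all names are illustrative).

```python
import numpy as np

def off_axis_projection(eye, half_w, half_h, near, far):
    """Asymmetric-frustum projection for a viewer at `eye`, in
    screen-centered coordinates (x right, y up, z toward the viewer).
    The physical display is the rectangle z = 0, |x| <= half_w,
    |y| <= half_h. Returns an OpenGL-style 4x4 projection matrix."""
    ex, ey, ez = eye  # ez > 0: the viewer is in front of the screen
    # Project the screen rectangle, as seen from the eye, onto the
    # near plane to get the (asymmetric) frustum extents.
    scale = near / ez
    left   = (-half_w - ex) * scale
    right  = ( half_w - ex) * scale
    bottom = (-half_h - ey) * scale
    top    = ( half_h - ey) * scale
    # Standard glFrustum matrix built from those extents.
    return np.array([
        [2*near/(right-left), 0.0, (right+left)/(right-left), 0.0],
        [0.0, 2*near/(top-bottom), (top+bottom)/(top-bottom), 0.0],
        [0.0, 0.0, -(far+near)/(far-near), -2*far*near/(far-near)],
        [0.0, 0.0, -1.0, 0.0],
    ])
```

Rendering the scene with this projection (plus a view translation by the negated eye position) makes “above-screen” content appear to pop out of the display and “below-screen” content appear to recede behind it.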
Head tracking using a Kinect sensor provided depth cues via motion parallax. Anaglyph rendering could provide stereo images, and additional depth cues were provided using on-screen “reflections”.
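As a sketch of how tracking and anaglyph stereo can combine: the single tracked head position is split into two eye positions, each eye's view is rendered separately, and the two images are fused by taking the red channel from the left-eye image and green/blue from the right. The helper names and the interpupillary-distance constant below are assumptions for illustration, not HoloTable's code.

```python
import numpy as np

IPD = 0.063  # assumed interpupillary distance in meters

def eye_positions(head, ipd=IPD):
    """Offset the tracked head position along screen-space x to get
    per-eye viewpoints (a simplification that ignores head roll/yaw)."""
    head = np.asarray(head, dtype=float)
    offset = np.array([ipd / 2.0, 0.0, 0.0])
    return head - offset, head + offset  # (left eye, right eye)

def anaglyph(left_rgb, right_rgb):
    """Red/cyan anaglyph from two rendered views: red channel from the
    left-eye image, green and blue channels from the right-eye image."""
    out = np.asarray(right_rgb).copy()
    out[..., 0] = np.asarray(left_rgb)[..., 0]
    return out
```

Each eye position would feed the off-axis projection above, yielding the slightly different frusta that produce stereo disparity.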
Interaction was via direct multitouch manipulation on a touchscreen, such as a Perceptive Pixel display.
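A common way to implement direct two-finger manipulation (a standard technique, not necessarily HoloTable's gesture code) is to solve for the rotate-scale-translate transform implied by two touch points moving between frames; a minimal sketch:

```python
import numpy as np

def two_touch_rst(p0_old, p1_old, p0_new, p1_new):
    """Return (R, t) such that p_new = R @ p_old + t for both touches,
    where R combines uniform scale and rotation. Points are 2D arrays."""
    v_old = np.asarray(p1_old) - np.asarray(p0_old)
    v_new = np.asarray(p1_new) - np.asarray(p0_new)
    # Scale from the change in finger spread; rotation from the change
    # in the angle between the two touch points.
    scale = np.linalg.norm(v_new) / np.linalg.norm(v_old)
    angle = np.arctan2(v_new[1], v_new[0]) - np.arctan2(v_old[1], v_old[0])
    c, s = np.cos(angle), np.sin(angle)
    R = scale * np.array([[c, -s], [s, c]])
    # Translation maps the old touch centroid onto the new one.
    t = (np.asarray(p0_new) + np.asarray(p1_new)) / 2 \
        - R @ ((np.asarray(p0_old) + np.asarray(p1_old)) / 2)
    return R, t
```

Applying (R, t) to the manipulated object each frame gives the familiar pinch-to-zoom, twist-to-rotate, and drag-to-pan behavior.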