{"id":199731,"date":"2011-01-31T10:48:32","date_gmt":"2011-01-31T10:48:32","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/events\/techfest-2011\/"},"modified":"2017-01-27T12:28:01","modified_gmt":"2017-01-27T20:28:01","slug":"techfest-2011","status":"publish","type":"msr-event","link":"https:\/\/www.microsoft.com\/en-us\/research\/event\/techfest-2011\/","title":{"rendered":"TechFest 2011"},"content":{"rendered":"

The latest thinking.\u00a0 The freshest ideas.<\/p>\n","protected":false},"excerpt":{"rendered":"

TechFest is an annual event, for Microsoft employees and guests, that showcases the most exciting research from Microsoft Research’s locations around the world. Researchers share their latest work\u2014and the technologies emerging from those efforts. The event provides a forum in which product teams and researchers can interact, fostering the transfer of groundbreaking technologies into Microsoft products.<\/p>\n","protected":false},"featured_media":0,"template":"","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"msr_startdate":"2011-03-08","msr_enddate":"2011-03-08","msr_location":"Redmond, WA, U.S.","msr_expirationdate":"","msr_event_recording_link":"","msr_event_link":"","msr_event_link_redirect":false,"msr_event_time":"","msr_hide_region":false,"msr_private_event":true,"footnotes":""},"research-area":[13562],"msr-region":[256048],"msr-event-type":[197941,197944],"msr-video-type":[],"msr-locale":[268875],"msr-program-audience":[],"msr-post-option":[],"msr-impact-theme":[],"class_list":["post-199731","msr-event","type-msr-event","status-publish","hentry","msr-research-area-computer-vision","msr-region-global","msr-event-type-conferences","msr-event-type-hosted-by-microsoft","msr-locale-en_us"],"msr_about":"The latest thinking.\u00a0 The freshest ideas.","tab-content":[{"id":0,"name":"Summary","content":"TechFest is an annual event, for Microsoft employees and guests, that showcases the most exciting research from Microsoft Research's locations around the world.\u00a0 Researchers share their latest work\u2014and the technologies emerging from those efforts.\u00a0 The event provides a forum in which product teams and researchers can interact, fostering the transfer of groundbreaking technologies into Microsoft products.\r\n\r\nWe invite you to explore the projects and\u00a0watch the videos.\u00a0 Immerse yourself in TechFest content and see how today\u2019s future will become tomorrow\u2019s reality.\r\n\r\n[accordion]\r\n\r\n[panel header=\"Feature Story\"]\r\n

TechFest Focus: Natural User Interfaces<\/h2>\r\nBy Douglas Gantenbein\u00a0| March 8, 2011 9:00 AM PT\r\n\r\nFor many people, using a computer still means using a keyboard and a mouse. But computers are becoming more like \u201cus\u201d\u2014better able to anticipate human needs, work with human preferences, even work on our behalf.\r\n\r\nComputers, in short, are moving rapidly toward widespread adoption of natural user interfaces (NUIs)\u2014interfaces that are more intuitive, that are easier to use, and that adapt to human habits and wishes, rather than forcing humans to adapt to computers. Microsoft has been a driving force behind the adoption of NUI technology. The wildly successful Kinect for Xbox 360 device\u2014launched in November 2010\u2014is a perfect example. It recognizes users, needs no controller to work, and understands what the user wants to do.\r\n\r\nIt won\u2019t be long before more and more devices work in similar fashion. Microsoft Research is working closely with Microsoft business units to develop new products that take advantage of NUI technology. In the months and years to come, a growing number of Microsoft products will recognize voices and gestures, read facial expressions, and make computing easier, more intuitive, and more productive.\r\n\r\nTechFest 2011, Microsoft Research\u2019s annual showcase of forward-looking computer-science technology, will feature several projects that show how the move toward NUIs is progressing. On March 9 and 10, thousands of Microsoft employees will have a chance to view the research on display, talk with the researchers involved, and seek ways to incorporate that work into new products that could be used by millions of people worldwide.\r\n\r\nNot all the TechFest projects are NUI-related, of course. Microsoft Research investigates the possibilities in dozens of computer-science areas. But quite a few of the demos to be shown shine a light on natural user interfaces, and each points to a new way to see or interact with the world. One demo shows how patients\u2019 medical images can be interpreted automatically, considerably enhancing the efficiency of a physician\u2019s work. One literally creates a new world\u2014instantly converting real objects into digital 3-D objects that can be manipulated by a real human hand. A third acts as a virtual drawing coach for would-be artists. And yet another enables a simple digital stylus to understand whether a person wants to draw with it, paint with it, or, perhaps, even play it like a saxophone.\r\n

Semantic Understanding of Medical Images<\/h2>\r\nHealthcare professionals today are overwhelmed by the amount of medical imagery. X-ray, MRI, CT, ultrasound, and PET scans\u2014all are growing more common as diagnostic tools.\r\n\r\nBut the sheer volume of these images also makes it more difficult to read and understand them in a timely fashion. To help make medical images easier to read and analyze, a team from Microsoft Research Cambridge has created InnerEye, a research project that uses the latest machine-learning techniques to speed image interpretation and improve diagnostic accuracy. InnerEye also has implications for improved treatments, such as enabling radiation oncologists to target treatment to tumors more precisely in sensitive areas such as the brain.\r\n\r\nIn the case of radiation therapy, it can take hours for a radiation oncologist to outline the edges of tumors and healthy organs to be protected. InnerEye\u2014developed by researcher Antonio Criminisi and a team of colleagues that included Andrew Blake, Ender Konukoglu, Ben Glocker, Abigail Sellen, Toby Sharp, and Jamie Shotton\u2014greatly reduces the time needed to accurately delineate the boundaries of anatomical structures of interest in 3-D.\r\n\r\nWith InnerEye, a radiologist or clinician uses a computer pointer on a screen image of a medical scan to highlight a part of the body that requires treatment. InnerEye then employs algorithms developed by Criminisi and his colleagues to accurately define the 3-D surface of the selected organ. In the resulting image, the highlighted organ\u2014a kidney, for instance, or even a complete aorta\u2014seems to almost leap from the rest of the image. The organ delineation offers a quick way of assessing things such as organ volume, tissue density, and other information that aids diagnosis.\r\n\r\nInnerEye also enables extremely fast, intuitive visual navigation and inspection of 3-D images. A physician can navigate to an optimized view of the heart simply by clicking on the word \u201cheart,\u201d because the system already knows where each organ is. This yields considerable time savings, with big economic implications.\r\n\r\nThe InnerEye project team also is investigating the use of Kinect in the operating theater. Surgeons often wish to view a patient\u2019s previously acquired CT or MR scans, but touching a mouse or keyboard could introduce germs. The InnerEye technology and Kinect help by automatically interpreting the surgeon\u2019s hand gestures. This enables the surgeon to navigate naturally through the patient\u2019s images.\r\n\r\nInnerEye has numerous potential applications in health care. Its automatic image analysis promises to make the work of surgeons, radiologists, and clinicians much more efficient\u2014and, possibly, more accurate. In cancer treatment, InnerEye could be used to evaluate a tumor quickly and compare it in size and shape with earlier images. The technology also could be used to help assess the number and location of brain lesions caused by multiple sclerosis.\r\n
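The article describes InnerEye\u2019s machine learning only in broad strokes. As a rough, hypothetical illustration of the general idea (learning to label the voxels that belong to an organ, then reading measurements such as volume off the result), the sketch below trains a generic off-the-shelf classifier on a synthetic 3-D volume. It is a toy example, not InnerEye\u2019s actual algorithm, and every function, parameter, and data set in it is invented for illustration.

<pre>
# Toy sketch of voxel-wise organ delineation with a generic classifier.
# This is NOT InnerEye's algorithm; it only illustrates the broad idea of
# learning to label voxels and then reading organ volume off the labels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def synthetic_scan(shape=(32, 32, 32), center=(16, 16, 16), radius=6.0):
    # Fake 3-D 'scan': a bright spherical 'organ' on a noisy background.
    zz, yy, xx = np.indices(shape)
    dist = np.sqrt((zz - center[0])**2 + (yy - center[1])**2 + (xx - center[2])**2)
    labels = (dist < radius).astype(int)     # 1 = organ voxel, 0 = background
    intensity = labels + rng.normal(0.0, 0.3, shape)
    return intensity, labels

def voxel_features(intensity):
    # Per-voxel features: raw intensity plus normalized spatial coordinates.
    zz, yy, xx = np.indices(intensity.shape)
    coords = np.stack([zz, yy, xx], axis=-1) / np.array(intensity.shape)
    return np.concatenate([intensity[..., None], coords], axis=-1).reshape(-1, 4)

# One 'annotated' training scan and one unseen test scan.
train_img, train_lab = synthetic_scan(center=(16, 16, 16))
test_img, test_lab = synthetic_scan(center=(15, 17, 16))

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(voxel_features(train_img), train_lab.ravel())

pred = clf.predict(voxel_features(test_img)).reshape(test_img.shape)
voxel_volume_mm3 = 1.0  # assume 1 mm isotropic voxels for this toy example
print('predicted organ volume (mm^3):', pred.sum() * voxel_volume_mm3)
print('voxel labelling accuracy:', (pred == test_lab).mean())
<\/pre>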

Blurring the Line Between the Real and the Virtual<\/h2>\r\nBreaking down the barrier between the real world and the virtual world is a staple of science fiction\u2014Avatar and The Matrix are but two recent examples. But technology is coming closer to actually blurring the line.\r\n\r\nMicrosoft Research Redmond researcher Hrvoje Benko and senior researcher Andy Wilson have taken a step toward making the virtual real with a project called MirageBlocks. Its aim is to simplify the process of digitally capturing everyday objects and converting them instantaneously into 3-D images. The goal is to create a virtual mirror of the physical world, one so readily understood that a MirageBlocks user could take an image of a brick and use it to create a virtual castle\u2014brick by brick.\r\n\r\nCapturing and visualizing objects in 3-D has long fascinated scientists, but new technology makes it more feasible. In particular, Kinect for Xbox 360 gave Benko and Wilson\u2014and intern Ricardo Jota\u2014an easy-to-use, $150 gadget whose multicamera design could capture the depth of an object. Coupled with new-generation 3-D projectors and 3-D glasses, Kinect helps make MirageBlocks perhaps the most advanced tool ever for capturing and manipulating 3-D imagery.\r\n\r\nThe MirageBlocks environment consists of a Kinect device, an Acer H5360 3-D projector, and Nvidia 3D Vision glasses synchronized to the projector\u2019s frame rate. The Kinect captures the object image and tracks the user\u2019s head position so that the virtual image is shown to the user with the correct perspective.\r\n\r\nUsers enter MirageBlocks\u2019 virtual world by placing an object on a tabletop, where it is captured by the Kinect\u2019s cameras. The object is instantly digitized and projected back into the workspace as a 3-D virtual image. The user can then move or rotate the virtual object using an actual hand or a numeric keypad. A user can combine duplicate objects, or different objects, to construct a virtual 3-D model. To the user, the virtual objects have the same depth and size as their physical counterparts.\r\n\r\nMirageBlocks has several real-world applications. It could add an entirely new dimension to simulation games, enabling game players to create custom models or devices from a few digitized pieces or to digitize any object and place it in a virtual game. MirageBlocks\u2019 technology could change online shopping, enabling the projection of 3-D representations of an object. It could transform teleconferencing, enabling participants to examine and manipulate 3-D representations of products or prototypes. It might even be useful in health care\u2014an emergency-room physician, for instance, could use a 3-D image of a limb with a broken bone to correctly align the break.\r\n
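A small, hypothetical sketch of the perspective trick mentioned above: to make a virtual object appear fixed in physical space, each virtual 3-D point is projected onto the display surface along the ray from the tracked eye position through that point. The code assumes a screen lying in the z = 0 plane and a head position supplied by some external tracker; it illustrates head-coupled perspective in general, not the MirageBlocks implementation.

<pre>
# Minimal sketch of head-coupled perspective rendering.
# Assumed setup (illustrative only): the tabletop display lies in the z = 0
# plane, +z points toward the viewer, and head_pos comes from a head tracker.
import numpy as np

def project_to_screen(points, head_pos):
    # Intersect each eye->point ray with the z = 0 screen plane and return
    # the resulting on-screen (x, y) coordinates.
    points = np.asarray(points, dtype=float)   # (N, 3) virtual points
    head = np.asarray(head_pos, dtype=float)   # (3,) tracked eye position, z > 0
    t = head[2] / (head[2] - points[:, 2])     # ray parameter where z reaches 0
    hit = head[None, :] + t[:, None] * (points - head[None, :])
    return hit[:, :2]

# A virtual cube corner hovering 5 cm above the screen, seen from two head positions.
corner = np.array([[0.10, 0.05, 0.05]])                 # metres
for head in ([0.0, -0.3, 0.6], [0.2, -0.3, 0.6]):
    print(head, '->', project_to_screen(corner, head))
# The drawn position shifts as the head moves, which is what keeps the virtual
# object appearing to sit at a fixed spot on the table for the viewer.
<\/pre>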

Giving the Artistically Challenged a Helping Hand<\/h2>\r\nIt\u2019s fair to say that most people cannot draw well. But what if a computer could help by suggesting to the would-be artist certain lines to follow or shapes to create? That\u2019s the idea behind ShadowDraw, created by Larry Zitnick\u2014who works as a researcher in the Interactive Visual Media Group at Microsoft Research Redmond\u2014and principal researcher Michael Cohen, with help from intern Yong Jae Lee from the University of Texas at Austin.\r\n\r\nIn concept, ShadowDraw seems disarmingly simple. A user begins drawing an object\u2014a bicycle, for instance, or a face\u2014using a stylus-based Cintiq 21UX tablet. As the drawing progresses, ShadowDraw surmises the subject of the emerging drawing and begins to suggest refinements by generating a \u201cshadow\u201d behind the would-be artist\u2019s lines that resembles the drawn object. By taking advantage of ShadowDraw\u2019s suggestions, the user can create a more refined drawing than otherwise possible, while retaining the individuality of their pencil strokes and overall technique.\r\n\r\nThe seeming simplicity of ShadowDraw, though, belies the substantial computing power being harnessed behind the screen. ShadowDraw is, at its heart, a database of 30,000 images culled from the Internet and other public sources. Edges are extracted from these original photographic images to provide stroke suggestions to the user.\r\n\r\nThe main component created by the Microsoft Research team is an interactive drawing system that reacts to the user\u2019s pencil work in real time. ShadowDraw uses a novel, partial-matching approach that finds possible matches between different sub-sections of the user\u2019s drawing and the database of edge images. Think of ShadowDraw\u2019s behind-the-screen interface as a checkerboard\u2014each square where a user draws a line will generate its own set of possible matches that cumulatively vote on suggestions to help refine the user\u2019s work. The researchers also created a novel method for spatially blending the various stroke suggestions for the drawing.\r\n\r\nTo test ShadowDraw, Zitnick and his co-researchers enlisted eight men and eight women. Each was asked to draw five subjects\u2014a shoe, a bicycle, a butterfly, a face, and a rabbit\u2014with and without ShadowDraw. The rabbit image was a control\u2014there were no rabbits in the database. When using ShadowDraw, the subjects were told they could use the suggested renderings or ignore them, and each subject was given 30 minutes to complete the 10 drawings.\r\n\r\nA panel of eight additional subjects judged the drawings on a scale of one to five, with one representing \u201cpoor\u201d and five \u201cgood.\u201d The panelists found that ShadowDraw significantly improved the drawings of people with average drawing skills. Interestingly, subjects rated as having poor or good drawing skills before using ShadowDraw saw little improvement. Zitnick says the poor artists were so bad that ShadowDraw couldn\u2019t even guess what they were attempting to draw. The good artists already had sufficient skills to draw the test objects accurately.\r\n
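The checkerboard-style matching described above can be sketched in a few lines: each grid cell the user has drawn in votes for the database edge images that look most similar within that cell, and the top-voted images would supply the faint \u201cshadow.\u201d The toy version below compares binary edge patches cell by cell; the real system uses far more robust patch descriptors and a spatial-blending step, so treat this purely as an illustration of the voting idea, with all names invented for the example.

<pre>
# Toy sketch of ShadowDraw-style cell-wise matching and voting (illustration only).
# Each occupied cell of the user's sketch is compared with the same cell of every
# database edge image; cells vote, and the top-voted images would be blended into
# the faint 'shadow' shown behind the user's strokes.
import numpy as np

GRID = 4    # 4 x 4 checkerboard of cells
CELL = 16   # each cell is 16 x 16 pixels, so the canvas is 64 x 64

def cells(img):
    # Split a (GRID*CELL, GRID*CELL) binary edge image into its grid cells.
    return {(r, c): img[r*CELL:(r+1)*CELL, c*CELL:(c+1)*CELL]
            for r in range(GRID) for c in range(GRID)}

def vote(user_sketch, database):
    # Tally one vote per drawn-in cell for the database image whose edges
    # overlap the user's strokes best in that cell.
    user_cells = cells(user_sketch)
    votes = np.zeros(len(database))
    for key, patch in user_cells.items():
        if patch.sum() == 0:        # the user has not drawn in this cell yet
            continue
        overlaps = np.array([np.logical_and(patch, cells(img)[key]).sum()
                             for img in database])
        if overlaps.max() > 0:
            votes[overlaps.argmax()] += 1
    return votes

# Tiny fake 'database' of edge images and a partial user sketch.
rng = np.random.default_rng(1)
database = [rng.random((GRID*CELL, GRID*CELL)) > 0.9 for _ in range(5)]
user_sketch = np.zeros((GRID*CELL, GRID*CELL), dtype=bool)
user_sketch[:CELL, :CELL] = database[2][:CELL, :CELL]   # echoes image 2 in one cell
print('votes per database image:', vote(user_sketch, database))
<\/pre>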

Enabling One Pen to Simulate Many<\/h2>\r\nHuman beings have developed dozens of ways to render images on a piece of paper, a canvas, or another drawing surface. Pens, pencils, paintbrushes, crayons, and more\u2014all can be used to create images or the written word.\r\n\r\nEach, however, is held in a slightly different way. That can seem natural when using the device itself\u2014people learn to manage a paintbrush in a way different from how they use a pen or a pencil. But those differences can present a challenge when attempting to work with a computer. A single digital stylus or pen can serve many functions, but to do so typically requires the user to hold the stylus in the same manner, regardless of the tool the stylus is mimicking.\r\n\r\nA Microsoft Research team aimed to find a better way to design a computer stylus. The team\u2014which included researcher Xiang Cao in the Human-Computer Interaction Group at Microsoft Research Asia; Shahram Izadi of Microsoft Research Cambridge; Benko and Ken Hinckley of Microsoft Research Redmond; Minghi Sun, a Microsoft Research Cambridge intern; Hyunyoung Song of the University of Maryland; and Fran\u00e7ois Guimbreti\u00e8re of Cornell University\u2014asked the question: How can a digital pen or stylus be as natural to use as the varied physical tools people employ? The solution, to be shown as part of a demo called Recognizing Pen Grips for Natural UI, is a digital pen enhanced with a capacitive multitouch sensor that senses where the user\u2019s hand touches the pen and an orientation sensor that detects the angle at which the pen is held.\r\n\r\nWith that information, the digital pen can recognize different grips and automatically behave like the desired tool. If a user holds the digital pen like a paintbrush, the pen automatically behaves like a paintbrush. Hold it like a pen, and it behaves like a pen, with no need to flip a switch on the device or manually choose a different stylus mode.\r\n\r\nThe implications of the technology are considerable. Musical instruments such as flutes or saxophones, along with many other objects, are built around similar shapes. A digital stylus with grip and orientation sensors could conceivably duplicate them all, while enabling the user to hold the stylus in the manner that is most natural. Even game controllers could be adapted to modify their behavior depending on how they are held, whether as a steering device for driving games or as a weapon in games such as Halo.\r\n\r\n[\/panel]\r\n\r\n[panel header=\"What is TechFest?\"]\r\n\r\nThe latest thinking.\u00a0 The freshest ideas.\r\n\r\nTechFest is an annual event, for Microsoft employees and guests, that showcases the most exciting research from Microsoft Research's locations around the world.\u00a0 Researchers share their latest work\u2014and the technologies emerging from those efforts.\u00a0 The event provides a forum in which product teams and researchers can interact, fostering the transfer of groundbreaking technologies into Microsoft products.\r\n\r\nWe invite you to explore the projects, watch the videos, follow the buzz, and join the discussion on Facebook and Twitter.\u00a0 Immerse yourself in TechFest content and see how today\u2019s future will become tomorrow\u2019s reality.\r\n

In the News<\/h3>\r\n