DemoFest Booths
-
Presenters: Xilin Chen, Xiujuan Chai, and Guang Li, Chinese Academy of Sciences; Hanjing Li and Dandan Yin, Beijing Union University
Sign language, the primary means of communication for the hearing impaired, is only understandable to those who have learned the language, a situation that can lead to debilitating social isolation. This demo will show our work on sign language recognition and translation with Kinect. Because Kinect captures RGB images and depth simultaneously, we use it to record the signer’s actions and recognize signs from both the hand trajectory and the hand shape.
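To make the capture step concrete, here is a minimal sketch of collecting a hand trajectory with the Kinect for Windows SDK v1 skeleton stream. The class name and the choice of the right hand are illustrative, and the hand-shape descriptor and the sign classifier that would consume this trajectory are not shown.

```csharp
using System.Collections.Generic;
using Microsoft.Kinect;

// Collects the right-hand joint trajectory from Kinect skeleton frames; the
// trajectory (plus a hand-shape descriptor from the depth image) would then
// be fed to a sign classifier.
class TrajectoryCapture
{
    private readonly List<SkeletonPoint> _rightHandPath = new List<SkeletonPoint>();

    public void Start(KinectSensor sensor)
    {
        sensor.SkeletonStream.Enable();
        sensor.SkeletonFrameReady += OnSkeletonFrameReady;
        sensor.Start();
    }

    private void OnSkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
    {
        using (SkeletonFrame frame = e.OpenSkeletonFrame())
        {
            if (frame == null) return;
            var skeletons = new Skeleton[frame.SkeletonArrayLength];
            frame.CopySkeletonDataTo(skeletons);
            foreach (var skeleton in skeletons)
            {
                if (skeleton.TrackingState != SkeletonTrackingState.Tracked) continue;
                // Record the 3-D position of the right hand for this frame.
                _rightHandPath.Add(skeleton.Joints[JointType.HandRight].Position);
            }
        }
    }
}
```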
-
Presenters: Vuong Le and Thomas Huang, University of Illinois at Urbana-Champaign; Yinpeng Chen, Zicheng Liu, Philip A. Chou, and Zhengyou Zhang, Microsoft Research
The capture and animation of the human face and body in 3-D offer great opportunities for many applications in telepresence, online shopping, and video games. In this demo, we will show that a user’s face and body can be easily captured and animated in 3-D with a simple setup (a computer, a webcam, and a Kinect sensor). The face and the human body will be demonstrated separately. The face demo will show 3-D face tracking by using the RGB camera, face-based emotion recognition, and a text/emotion-driven avatar. The human body demo will automatically construct a 3-D model of the user’s body, starting with the user standing in a relaxed pose in front of a Kinect sensor for a few seconds. Subsequently, the system allows users to drive the 3-D body model with their own movements (avateering), as well as to generate animations with predefined movements.
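As one illustration of the avateering step, the sketch below computes the rotation that aligns an avatar bone’s rest-pose direction with the direction measured between two captured joints. The Vec3 type and method names are ours rather than part of the demo’s code, and a real system would convert the axis-angle result into the animation engine’s rotation format.

```csharp
using System;

// Illustrative retargeting step for avateering: given a bone's rest-pose
// direction on the avatar and the direction measured between two captured
// joints, compute the axis-angle rotation that aligns them. Applying such a
// rotation per bone, every frame, drives the 3-D model from the user's pose.
struct Vec3
{
    public double X, Y, Z;
    public Vec3(double x, double y, double z) { X = x; Y = y; Z = z; }
}

static class Avateering
{
    public static void BoneRotation(Vec3 restDir, Vec3 capturedDir,
                                    out Vec3 axis, out double angle)
    {
        Vec3 a = Normalize(restDir), b = Normalize(capturedDir);
        // The rotation axis is the cross product of the two directions...
        axis = new Vec3(a.Y * b.Z - a.Z * b.Y,
                        a.Z * b.X - a.X * b.Z,
                        a.X * b.Y - a.Y * b.X);
        // ...and the angle comes from their dot product (clamped for safety).
        double dot = a.X * b.X + a.Y * b.Y + a.Z * b.Z;
        angle = Math.Acos(Math.Max(-1.0, Math.Min(1.0, dot)));
    }

    private static Vec3 Normalize(Vec3 v)
    {
        double len = Math.Sqrt(v.X * v.X + v.Y * v.Y + v.Z * v.Z);
        return new Vec3(v.X / len, v.Y / len, v.Z / len);
    }
}
```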
-
Presenters: Shahram Izadi, Microsoft Research Cambridge; Stewart Tansley, Microsoft Research
Kinect for Windows gives computers eyes, ears, and the capacity to use them. With Kinect for Windows, thousands of businesses and academics are creating applications that put people first, allowing their customers to interact naturally with computers by simply gesturing and speaking. The latest update to the Kinect for Windows SDK adds new Kinect Interactions that comprehend natural gestures such as “grip” and “push,” and includes Kinect Fusion, a tool that creates 3-D reconstructions of people and objects in real time. In addition, resources such as the Human Interface Guidelines and new samples, including ones for OpenCV and MATLAB, help developers build advanced Kinect for Windows applications by using common standard libraries.
-
Presenters: Steven Johnston, University of Southampton; Nicolas Villar and Scarlet Schwiderski-Grosche, Microsoft Research Cambridge
Microsoft .NET Gadgeteer is an open-source toolkit for prototyping electronic devices. Simply connect hardware modules together and program the device in C# or Visual Basic to bring your ideas into reality. Once you are satisfied with the functionality, design an enclosure to create a custom device that is aesthetically pleasing and fully functional.
With nearly 100 hardware modules, 10 mainboards, and a collection of hardware manufacturers as well as community contributors, prototyping a device is simple, and for more specialized projects, .NET Gadgeteer is built with extensibility at its heart. Come and see how leading universities use .NET Gadgeteer for computer science, environmental science, and design.
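For a sense of the programming model, here is a minimal sketch of a Gadgeteer program in C#. It assumes a button and a multicolor LED module have been placed on the designer surface with those names; the exact module types and method names depend on the hardware vendor’s libraries.

```csharp
using Gadgeteer.Modules.GHIElectronics;

public partial class Program
{
    // ProgramStarted runs once after the mainboard boots and the modules
    // placed on the designer surface (here: button, multicolorLED) are wired up.
    void ProgramStarted()
    {
        button.ButtonPressed += button_ButtonPressed;
        button.ButtonReleased += button_ButtonReleased;
    }

    // Light the LED green while the button is held down.
    void button_ButtonPressed(Button sender, Button.ButtonState state)
    {
        multicolorLED.TurnGreen();
    }

    void button_ButtonReleased(Button sender, Button.ButtonState state)
    {
        multicolorLED.TurnOff();
    }
}
```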
-
Presenters: Bongshin Lee and Greg Smith, Microsoft Research
Microsoft continues to innovate in the natural user interface (NUI) space, constantly providing new modalities for interacting with computing systems. We propose applying our expertise in NUI to create new, more natural ways of accessing information that help people effectively explore and present their data. The freeform nature of sketch interaction lends itself to fast, natural interaction without the use of widgets or menus.
We are exploring the possibility of combining sketch-based interaction and computer-supported visualizations by bringing data to interactive whiteboards, enabling more fluid data exploration and presentation and extending the traditional advantages of the whiteboard. In this demo, we present a system called SketchInsight, which recognizes hand-drawn input to enable data exploration on interactive whiteboards. SketchInsight also makes it possible to tell stories with data through freeform sketching.
-
Presenters: Javier Porras Luraschi and Roman Snytsar, Microsoft Research
ChronoZoom is an open-source community project that is dedicated to authoring and visualizing the history of everything. Big History is the attempt to understand, in a unified, interdisciplinary way, the history of the cosmos, Earth, life, and humanity. By using Big History as the story line, ChronoZoom seeks to bridge the gap between the humanities and the sciences and to make all this information easily understandable and navigable. See how ChronoZoom enables users to browse through history and find data in the form of articles, images, video, sound, and other media.
-
Presenter: Steven Drucker, Microsoft Research
The natural user interface meets big data meets visualization: SandDance is a web-based visualization system that uses 3-D hardware acceleration to explore the relationships between hundreds of thousands of items. Arbitrary data tables can be loaded, and results can be filtered by using facets and displayed by using a variety of layouts. Natural-user-interaction techniques, including multi-touch and gestures, are supported.
-
Presenters: Ratul Mahajan, AJ Brush, Arjmand Samuel, and Danny Huang, Microsoft Research
In this demo, we will show the HomeOS platform connected to a number of devices that simulate a home environment. These devices will collect data and upload it to the cloud. We will also demonstrate the Lab of Things monitoring portal and walk attendees through some of the key scenarios in the home.
While many connected devices exist, people struggle with setup, configuration, or even knowing which useful applications they could use in their homes. We will demonstrate Lab of Things, a connected-device platform that helps users determine which applications will work with devices they already own and which applications (such as security, energy monitoring, or eldercare) they could enable by adding a few devices (for example, cameras, water sensors, and door locks). We will also show the technology we have built to facilitate research deployments in homes, including a monitoring portal that eases the burden of field deployments on researchers.
-
Presenters: Alex Wade, Roy Zimmermann, Michael Zyskowski, and Josh Barnes, Microsoft Research
Microsoft Research Connections seeks to improve the discovery and exploration of the vast body of research content and to accelerate scientific discoveries and breakthroughs by fostering the dissemination of ideas throughout the research community. This DemoFest should provide a window into our recent projects and ideas, but we also want to hear from you! Stop by and let us know how you discover research content and engage in the scholarly dialogue. What tools do you use? What is working great? What is lacking? We want to know!
-
Presenter: Dennis Fetterly, Microsoft Research, Silicon Valley Lab
We have ported the Dryad and DryadLINQ data-parallel computing frameworks to YARN, the next-generation Hadoop architecture. This enables developers who use Microsoft .NET to scale out data analytics computations to clusters of commodity computers while using the infrastructure provided by the Apache Hadoop project. DryadLINQ users can declaratively express their computations, which operate on data stored in the Hadoop Distributed File System (HDFS), using familiar LINQ operators that can easily invoke preexisting managed or native code libraries. Dryad and DryadLINQ for Hadoop YARN will be available for download.
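The sketch below shows the shape of such a computation: a word count written with ordinary LINQ operators. In DryadLINQ the source would be an IQueryable backed by a partitioned HDFS table, and the query provider distributes the work across the cluster; the call that loads that table from an HDFS path is omitted here because it is specific to the DryadLINQ release.

```csharp
using System.Collections.Generic;
using System.Linq;

static class WordCount
{
    // Counts word occurrences across all input lines. Written against
    // IQueryable so a distributed LINQ provider (such as DryadLINQ) can
    // partition and execute it on a cluster; the same query also runs
    // locally over an in-memory source via AsQueryable().
    public static IQueryable<KeyValuePair<string, int>> Run(IQueryable<string> lines)
    {
        return lines
            .SelectMany(line => line.Split(' '))                          // tokenize each line
            .GroupBy(word => word)                                        // group (shuffle) by word
            .Select(g => new KeyValuePair<string, int>(g.Key, g.Count())) // aggregate per word
            .OrderByDescending(p => p.Value);                             // most frequent first
    }
}
```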
-
Presenter: Aparna Lakshmiratan, Microsoft Research
The web is filled with rich information and applications that aspire to use this information. Classifiers and extractors help identify meaningful text in unstructured documents and enrich it with metadata that can be consumed by applications. Building classifiers and entity extractors is not new. Bing, Google, and Yahoo have built hundreds in an attempt to understand queries, web pages, and ads. Unfortunately, the efficacy of the current approaches is limited by the number of machine learning experts, the number of programmers, and the complexity of the tasks.
The ICE platform enables interactive machine learning with large-scale data for non-experts. ICE makes it easy for non-engineers to create classifiers and extractors using large data sets (for example, 100 million webpages, 1 billion queries, or 10 million images) in a matter of hours. The ICE system integrates and reinforces the principles of iterative and interactive machine learning with active labeling, active featuring, and automatic regularization. ICE offers a user interface that enables the user to view web pages and start to label them. As the user labels, the system builds a classifier and offers new examples for the user to verify. ICE also allows users to create and customize concepts (for example, a list of food ingredients or a date extractor) to make the classifier more robust, and to build on one another’s work by using existing classifiers and extractors as features for new ones. The ability to produce thousands of high-quality classifiers and extractors can be valuable for applications like web and social search, advertising, email, and mobile.
While the current focus of ICE is on web data, the framework is general enough to support a variety of data types, including text and images (for example, pictures, digits, queries, and web pages).
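The loop below is an illustrative skeleton of that interactive workflow, not the actual ICE API: after each batch of user labels, the classifier is retrained and the items it is least certain about are surfaced for verification. The IClassifier interface and all names here are our own assumptions.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Minimal stand-in for a trainable classifier over text items.
interface IClassifier
{
    void Train(IEnumerable<(string item, bool label)> labeled);
    double Score(string item); // probability that the item matches the concept
}

static class ActiveLabelingLoop
{
    public static void Run(IClassifier classifier, List<string> pool,
                           Func<string, bool> askUser, int rounds, int perRound)
    {
        var labeled = new List<(string, bool)>();
        for (int round = 0; round < rounds; round++)
        {
            // Surface the unlabeled items whose scores are closest to 0.5,
            // i.e. the ones the classifier is least sure about. (An untrained
            // classifier can return a neutral score such as 0.5 at first.)
            var toVerify = pool
                .Except(labeled.Select(l => l.Item1))
                .OrderBy(item => Math.Abs(classifier.Score(item) - 0.5))
                .Take(perRound)
                .ToList();

            foreach (var item in toVerify)
                labeled.Add((item, askUser(item))); // user confirms or rejects

            classifier.Train(labeled); // retrain with the enlarged label set
        }
    }
}
```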
-
Presenters: Srikanth Kandula, Ratul Mahajan, and Ming Zhang, Microsoft Research
SWAN is a control system for inter-data center networks that uses the software-defined networking (SDN) paradigm to increase network utilization from 40 percent (today’s level) to more than 95 percent. At the same time, it improves the fault tolerance and scale-out properties of the network by replacing big iron routers with arrays of commodity switches. These benefits translate into a reduction in the amortized annual cost of the network of tens of millions of dollars, and into multi-week reductions in the time needed to transfer large amounts of data across the wide area network (WAN).
-
Presenters: Judith Bishop, Christophe Poulain, and Evelyne Viegas, Microsoft Research Redmond; Don Syme and Kenji Takeda, Microsoft Research Cambridge
Try F# enables anyone, in a few minutes with no registration and no download, to work with topics as varied as statistics, machine learning, and charting and to apply them to data-rich problems. Try F# addresses the dual challenges of the continually growing web of linked data and of open data resources being made available more broadly by organizations such as the World Bank and communities such as Freebase. Try F#, built on the Windows Azure platform and the information-rich paradigm of F# 3.0 type providers, enables anyone, from the data scientist to the analyst to the developer, to explore, slice, dice, and query information-rich data sets and to gain easy visualization of the results to understand the value of the data. Moreover, anyone can write code in the browser and then share it with the rest of the community.
-
Presenter: Robert Gruen, Microsoft Research
Though the phrase “going viral” has permeated popular culture, the concept of virality itself is surprisingly elusive, with past work failing to rigorously define it or even definitively show the existence of viral content. By examining nearly a billion information cascades on Twitter, involving the diffusion of news, videos, and photos, this project has developed a quantitative notion of virality for social media and, in turn, identified thousands of viral events.
ViralSearch lets users interactively explore the diffusion structure of popular content. After selecting a story, users can view a time-lapse video of how the story spread from one user to the next, identify which users were particularly influential in the process, and examine the chain of tweets along any path in the diffusion cascade. The science and technology behind ViralSearch can help identify topical experts, detect trending topics, and provide virality metrics for a variety of content.
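For a flavor of how virality can be quantified, the sketch below computes structural virality, the mean pairwise distance between nodes in a diffusion tree, which is one published formulation from this line of research (broadcast-style cascades score low, long person-to-person chains score high). The data layout (child adjacency lists keyed by user ID) and all names are illustrative assumptions, not the ViralSearch implementation.

```csharp
using System.Collections.Generic;
using System.Linq;

static class StructuralVirality
{
    // Returns the average shortest-path distance between all pairs of nodes
    // in a diffusion tree given as child adjacency lists (parent -> children).
    public static double Compute(Dictionary<int, List<int>> children)
    {
        // Build an undirected adjacency list so BFS measures tree distances.
        var adj = new Dictionary<int, List<int>>();
        foreach (var kv in children)
            foreach (var child in kv.Value)
            {
                if (!adj.ContainsKey(kv.Key)) adj[kv.Key] = new List<int>();
                if (!adj.ContainsKey(child)) adj[child] = new List<int>();
                adj[kv.Key].Add(child);
                adj[child].Add(kv.Key);
            }

        long total = 0, pairs = 0;
        foreach (var source in adj.Keys)
        {
            // BFS from each node accumulates its distances to all other nodes.
            var dist = new Dictionary<int, int> { [source] = 0 };
            var queue = new Queue<int>();
            queue.Enqueue(source);
            while (queue.Count > 0)
            {
                int u = queue.Dequeue();
                foreach (var v in adj[u])
                    if (!dist.ContainsKey(v))
                    {
                        dist[v] = dist[u] + 1;
                        queue.Enqueue(v);
                    }
            }
            total += dist.Values.Sum();
            pairs += dist.Count - 1;
        }
        return pairs == 0 ? 0 : (double)total / pairs;
    }
}
```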
-
Presenter: Curtis Wong, Microsoft Research
GeoFlow uses the architecture and ideas behind the WorldWide Telescope to deliver interactive spatiotemporal data visualization and virtual cinematography in a 3-D environment for business intelligence, working with small- to medium-sized data tables in Microsoft Excel 2013.
-
Presenters: Nikolai Tillmann, Michal Moskal, Peli de Halleux, Manuel Fahndrich, Sebastian Burckhardt, and Juan Chen, Microsoft Research
TouchDevelop is a modern development environment tuned for touchscreens. It runs on virtually all devices, including PC, Windows Phone, Mac, iPhone, iPad, and Android. TouchDevelop can be used in the classroom to teach programming concepts, and it is ideal for classes on mobile computing because it cuts the time required to write apps. A free e-book helps you get started.
TouchDevelop features a live programming experience that makes it easy to design and implement modern user interfaces. A natural language processor turns spoken or typed free-text queries into type-correct code fragments. TouchDevelop can generate true apps that you can submit for certification in both the Windows Phone Store and the Windows 8 Store. Write code now with TouchDevelop.
-
Presenters: Peter Scupelli and Bruce Hanington, Carnegie Mellon University; Noa Morag and Oren Zuckerman, Herzliya School of Communication, Israel; Bibhudutta Baral and Rupeshkumar Vyas, National Institute of Design, India; Clay Shirky, New York University; Trevor Duncan, Northumbria University; Berry Eggen, Technische Universiteit (TU) Eindhoven; Jorge Meza Aguilar, Universidad Iberoamericana; Christian Moeller, University of California at Los Angeles; Axel Roesler, University of Washington
We live in a world that is increasingly alive with sensors and data. The big data, sensor network, and transparency movements have left us with a glut of potentially useful free data that is lying fallow. How can we use this to improve life, local community, and the world at large?
What key problems can this data help solve, and what new troubles can we anticipate it will create?