Latin American Faculty Summit 2016
May 18, 2016 - May 20, 2016

Location: Rio de Janeiro, Brazil

Demo Madness

  • Presenters: Ivan Tashev & David Johnston

    The exhibit demonstrates the advantages that spatial audio can provide for augmented- and virtual-reality scenarios such as gaming, entertainment, and virtual presence. While human vision has a limited field of view (further restricted by the device itself), humans can hear and locate sound sources from a full 360 degrees (in fact, 4 pi steradians). We will demonstrate the ability of spatial audio to complement vision and enhance the overall experience for AR/VR users. During the demo, attendees can wear an AR/VR device and play a short interactive game with spatial sound, vision, gesture, and voice, or look and listen around selected places where we have recorded 3D video and audio.
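
    One of the simplest cues that spatial-audio systems exploit is the interaural time difference (ITD): sound from the side reaches one ear slightly before the other. The sketch below is purely illustrative (it is not the demo's actual rendering algorithm) and uses the classic Woodworth approximation with typical values for head radius and the speed of sound:

```python
import math

def itd_seconds(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    """Interaural time difference (Woodworth approximation) for a source
    at the given azimuth (0 = straight ahead, 90 = directly to one side)."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound) * (math.sin(theta) + theta)

# A source straight ahead produces no delay between the ears; a source at
# the side produces the maximum delay, on the order of 0.65 milliseconds.
```

    Delaying (and attenuating) one ear's signal by this amount is enough to shift a sound's perceived direction, which is the effect the demo builds on.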

  • Presenters: Alex Wade & Cha Zhang

    We demonstrate a real-time system that recognizes people’s emotions in a crowd. Such a system can be useful in advertising, education, medical applications, and more.

  • Presenter: Geoff Zweig

    Microsoft Cognitive Services let you build apps with powerful algorithms using just a few lines of code. They work across devices and platforms such as iOS, Android, and Windows; keep improving; and are easy to set up. These APIs span the areas of Vision, Speech, Language, Knowledge, and Search. The APIs in the Knowledge area enable developers to build semantic search features into their applications based on custom content and domain-specific grammars. Come learn how to use the Entity Linking Intelligent Service (ELIS) to recognize and identify each separate entity in your text based on its context. The Knowledge Exploration Service (KES) can be used to add semantic search capabilities to your applications using data, schema, and domain-specific grammars defined by you.

  • Presenter: Will Lewis

    Microsoft Translator builds on decades of natural language processing, machine learning, and deep learning to help break down language barriers. Its phone apps allow you to translate text, images, and even full conversations. Microsoft Translator’s speech translation service enables Skype users, through Skype Translator, to converse in their native language with other Skype users speaking in theirs. Further, the Microsoft Translator API exposes the text and speech translation features to anyone interested in building tools or apps that need them. Wherever you use Microsoft Translator, thanks to the power of machine learning, it will continue to improve over time as more people use it across apps, services, and devices.
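
    To make the API mention concrete, here is a sketch of how a client might assemble a text-translation request. The endpoint, header name, and payload shape follow the Microsoft Translator Text API v3 REST interface; treat them as illustrative (the subscription key is a placeholder, and the exact interface available to you depends on your Azure subscription):

```python
import json

# Endpoint and header names follow the Translator Text API v3 conventions;
# "YOUR_KEY" is a placeholder for a real Azure subscription key.
ENDPOINT = "https://api.cognitive.microsofttranslator.com/translate"

def build_translate_request(text, to_lang, subscription_key="YOUR_KEY"):
    """Assemble the URL, headers, and JSON body for a translation call."""
    url = f"{ENDPOINT}?api-version=3.0&to={to_lang}"
    headers = {
        "Ocp-Apim-Subscription-Key": subscription_key,
        "Content-Type": "application/json",
    }
    body = json.dumps([{"Text": text}])  # the API accepts a list of texts
    return url, headers, body

url, headers, body = build_translate_request("Hello, world", "pt")
```

    The returned triple can be passed to any HTTP client; the service responds with a JSON list of translations, one per input text.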

  • Presenter: Dave Brown

    NUIgraph is a prototype Windows 10 app for visually exploring data in order to discover and share insights. The app is designed for touch interaction, though a mouse can also be used. Data can be loaded from .csv files (for example, exported from Excel). Once loaded, each row in the data is represented by a block on the screen. Blocks can be flexibly mapped to position, color, and size using the columns in the data, or arranged into stacks. In this way, multi-dimensional data can be explored to find patterns that may lead to new insights.
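
    The row-to-block mapping described above can be sketched in a few lines. This is a toy stand-in for NUIgraph's mapping step, not its implementation; the CSV columns and the mapping scheme are invented for illustration:

```python
import csv
import io

# Invented sample data: one row per city.
CSV_DATA = """city,population,region
Rio,6.7,Southeast
Manaus,2.2,North
Recife,1.6,Northeast
"""

def rows_to_blocks(text, label_col, size_col, color_col):
    """Turn each CSV row into a 'block' whose position, size, and color
    are driven by chosen columns, as in the demo's visual mapping."""
    blocks = []
    for i, row in enumerate(csv.DictReader(io.StringIO(text))):
        blocks.append({
            "x": i,                        # position: row order along one axis
            "size": float(row[size_col]),  # size: a numeric column
            "color": row[color_col],       # color: a categorical column
            "label": row[label_col],
        })
    return blocks

blocks = rows_to_blocks(CSV_DATA, "city", "population", "region")
```

    Remapping a column to a different visual attribute is then just a matter of calling the function with different column names, which is the kind of flexibility the app exposes through touch.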

  • Presenter: Katja Hofmann

    Project Malmo allows computer scientists to use the world of Minecraft as a testing ground for conducting research designed to improve artificial intelligence.

  • Presenter: Mike Zyskowski

    Project Premonition seeks to detect pathogens in animals before these pathogens make people sick. It does this by treating a mosquito as a device that can find animals and sample their blood. Project Premonition uses drones and new robotic mosquito traps to capture many more mosquitoes from the environment than previously possible, and then analyzes their body contents for pathogens. Pathogens are detected by gene sequencing the collected mosquitoes and computationally searching the sequenced genetic material for known and unknown pathogens.
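
    The "search sequenced material for known pathogens" step can be illustrated with a toy exact-match scan. Real pipelines use far more sophisticated alignment and assembly methods, and the signature sequences below are invented:

```python
# Invented toy signatures; real pathogen detection works on much longer
# sequences with approximate matching, not exact substrings.
KNOWN_SIGNATURES = {
    "pathogen_A": "GATTACAGAT",
    "pathogen_B": "CCGGTTAACC",
}

def find_pathogens(reads, signatures=KNOWN_SIGNATURES):
    """Return the names of pathogens whose signature occurs in any read."""
    hits = set()
    for read in reads:
        for name, sig in signatures.items():
            if sig in read:
                hits.add(name)
    return sorted(hits)

# One read contains pathogen_A's signature; the other contains nothing known.
reads = ["TTGATTACAGATCC", "AAAAAAAA"]
```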

  • Presenters: Sergio Lima Netto, Lucas Maia, and José Fernando L. de Oliveira

    Two groups from the Signal, Multimedia, and Telecommunication Lab will demonstrate their most recent research.

    The Audio Processing Group will demonstrate applications related to its main research interests, including: audio signal modelling; automatic audio quality assessment; automatic music transcription; music information retrieval; sound source/sensor localization; sound source separation; audio coding; audio restoration; digital audio effects; singing voice processing; algorithmic music composition; and binaural generation of 3D sound.

    The Image Processing Group will demonstrate a system for the detection of abandoned objects using a moving camera. The system operates in an industrial environment and compares a reference signal, previously validated by the system operator, with the newly acquired (target) video. Anomalies (detected objects) are associated with image discrepancies across consecutive video frames. Solutions for running the system in real time are proposed and discussed.
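
    The core comparison idea can be sketched as simple frame differencing: flag cells whose intensity departs from the reference beyond a threshold. This is a minimal illustration, not the group's system; in particular, compensating for the moving camera (aligning reference and target) is the hard part and is omitted here:

```python
def changed_cells(reference, target, threshold=30):
    """Compare two equal-sized grayscale frames (lists of rows of ints)
    and return the (row, col) cells that differ beyond the threshold."""
    anomalies = []
    for r, (ref_row, tgt_row) in enumerate(zip(reference, target)):
        for c, (ref_px, tgt_px) in enumerate(zip(ref_row, tgt_row)):
            if abs(ref_px - tgt_px) > threshold:
                anomalies.append((r, c))
    return anomalies

# A bright "object" appears at cell (0, 1) of the target frame.
reference = [[10, 10], [10, 10]]
target    = [[10, 200], [10, 10]]
```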

  • Presenter: Jonathan Protzenko

    The BBC micro:bit is a small programmable device half the size of a credit card; it features 25 LEDs, buttons, an accelerometer, a compass, and Bluetooth capabilities. The device has been handed out free to a million children aged 11 to 12 in the UK; Microsoft provided the programming environment, based on TouchDevelop. I will talk about the device, demo the programming environment, and discuss the global “CS literacy” trend, in which more and more countries emphasize CS education.

  • Presenter: Carlos Garcia Jurado Suarez

    Building classifiers and entity extractors is not new. The efficacy of current approaches, though, is limited by the small number of machine-learning experts and programmers available and by the complexity of the tasks. The Platform for Interactive Concept Learning (PICL) enables interactive, iterative machine learning over big data for non-experts. PICL makes it easy to build classifiers and extractors in hours: users label a few examples, add features, and verify system predictions. The ability to produce thousands of high-quality classifiers and extractors can be valuable for applications such as search, advertising, email, and mobile.
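
    The label-a-few / predict / verify loop can be illustrated with a toy classifier that updates the moment the user labels an example. This is not the PICL algorithm, just a sketch of the interaction pattern it enables:

```python
from collections import Counter

class TinyClassifier:
    """A toy word-count classifier that improves as the user labels examples."""

    def __init__(self):
        self.word_counts = {"pos": Counter(), "neg": Counter()}

    def label(self, text, cls):
        """User supplies a labeled example; the model updates immediately."""
        self.word_counts[cls].update(text.lower().split())

    def predict(self, text):
        """Score a text by how often its words were seen in each class."""
        words = text.lower().split()
        pos = sum(self.word_counts["pos"][w] for w in words)
        neg = sum(self.word_counts["neg"][w] for w in words)
        return "pos" if pos >= neg else "neg"

# The interactive loop: label a couple of examples, then check predictions
# and label more wherever the model gets them wrong.
clf = TinyClassifier()
clf.label("great product fast shipping", "pos")
clf.label("broken on arrival refund", "neg")
```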

  • Presenter: Leonardo Nunes

    Real-time event detection in video streams will be shown for different scenarios, including urban mobility and public safety. The demonstration will highlight the lightweight nature of the proposed solutions, as well as their ability to run as an Azure service handling several video streams.

  • Presenters: Ricardo Sabedra and Witallo Oliveira

    The EchoSense project is a wearable device designed to assist people with visual impairments with mobility and sensory development. The project was chosen as one of the Brazilian national finalists in the Innovation category of the Microsoft Imagine Cup. The device uses sensors and vibration motors to provide tactile information, enabling users to perceive precisely where obstacles are without needing to touch them.

  • Presenter: Jon Campbell

    The MSR Enable group focuses on creating technologies to help restore capabilities to people living with disabilities. With a specific focus on ALS (also known as MND, or Lou Gehrig’s Disease), our team is producing advancements in the areas of natural communication and independent mobility.

  • Presenter: Judith Bishop

    Open source is a powerful way of advancing software development. Microsoft has open-sourced over fifty cutting-edge research projects as well as key software such as .NET, and has catalogued this software especially for academics. Many of the projects are cross-platform, and some also run in browser versions. We’ll demonstrate how to find the software best suited to your research and teaching needs.

  • Presenter: Sunayana Sitaram

    Code-mixing is the alternation between two or more languages at the sentence, phrase, word, or morpheme level, and it is prevalent in multilingual societies all over the world. We demonstrate a system for the machine translation of code-mixed text in several languages. We first perform word-level language detection and matrix-language identification. We then use this information, together with an existing translator, to translate code-mixed tweets into a language of the user’s choice.
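
    The word-level detect-then-route pipeline can be sketched with tiny lexicons. The word lists and the translation table below are invented for illustration, and the "translator" is a stub standing in for a real translation service:

```python
# Invented miniature lexicons for language identification.
EN_WORDS = {"i", "really", "like", "this"}
PT_WORDS = {"muito", "obrigado", "gosto", "disso"}

# Stub translation table standing in for a real Portuguese-English translator.
PT_TO_EN = {"muito": "very", "obrigado": "thanks", "gosto": "like", "disso": "this"}

def tag_languages(sentence):
    """Tag each word of a code-mixed sentence as 'en', 'pt', or 'unk'."""
    tags = []
    for w in sentence.lower().split():
        if w in EN_WORDS:
            tags.append((w, "en"))
        elif w in PT_WORDS:
            tags.append((w, "pt"))
        else:
            tags.append((w, "unk"))
    return tags

def translate_to_english(sentence):
    """Route only the Portuguese words through the (stubbed) translator."""
    out = []
    for w, lang in tag_languages(sentence):
        out.append(PT_TO_EN.get(w, w) if lang == "pt" else w)
    return " ".join(out)
```

    For example, the code-mixed input "i really gosto disso" comes out as "i really like this": the English words pass through untouched, and only the Portuguese words are translated.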

  • Presenter: Jonathan Taylor

    How would truly robust and accurate hand-tracking technology transform the way we interact with our devices? Catch a glimpse of such a future through a number of exciting new user experiences. See your hands appear as avatars, allowing you to play a virtual piano or interact with virtual objects as if they were physical.