-
Presenters: Lucas Joppa, Dan Morris
Microsoft’s AI for Earth program was created to fundamentally change the way that society monitors, models, and ultimately manages Earth’s natural resources. Computer vision algorithms provide major advances towards that ambition, enabling the detection and classification of thousands of species from photos and videos. Integrating these algorithms into popular applications allows everyone, from professional scientists to casual hobbyists, to contribute to a rapidly growing portfolio of biodiversity observations at planetary scale.
-
Presenters: Manuel Costa, Olya Ohrimenko
Azure Confidential Computing (ACC) provides strong security and privacy guarantees: customers can upload encrypted code and data, and then receive encrypted results with the guarantee that no one can see the customers’ secrets. This guarantee holds even in the presence of malicious insiders with administrative privileges or malware that exploits bugs in the operating system or the hypervisor. Our research in this space ranges from designing hardware-based isolation technology for code and data to refactoring existing services to use these hardware capabilities and creating new ones. We will demonstrate our work on confidential AI, including multi-party machine learning and confidential consortium-based blockchains, which provide strong privacy guarantees, high throughput, and low latency.
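A minimal sketch of that data flow, with entirely hypothetical names: parties upload only ciphertext, and decryption and computation happen inside a stand-in for the hardware-isolated enclave. Real deployments rely on hardware enclaves and remote attestation rather than a shared key; the symmetric encryption here (from the `cryptography` package) merely illustrates that plaintext never leaves the trusted boundary.

```python
# Conceptual sketch of the confidential-computing data flow (hypothetical names).
# Real systems use hardware enclaves and remote attestation; this toy only shows
# that plaintext is handled solely inside the trusted boundary.
# Requires: pip install cryptography
from cryptography.fernet import Fernet

class ToyEnclave:
    """Stands in for a hardware-isolated enclave holding the data key."""
    def __init__(self, data_key: bytes):
        self._cipher = Fernet(data_key)   # key provisioned after attestation

    def run(self, encrypted_inputs):
        # Decryption and computation happen only inside the trusted boundary.
        values = [float(self._cipher.decrypt(blob)) for blob in encrypted_inputs]
        result = sum(values) / len(values)                  # e.g., multi-party average
        return self._cipher.encrypt(str(result).encode())   # encrypted result out

# Two parties share a key with the enclave (via attestation in a real system)
# and upload only ciphertext; the host OS and operators never see plaintext.
key = Fernet.generate_key()
party_cipher = Fernet(key)
uploads = [party_cipher.encrypt(b"42.0"), party_cipher.encrypt(b"58.0")]

enclave = ToyEnclave(key)
encrypted_result = enclave.run(uploads)
print(party_cipher.decrypt(encrypted_result).decode())  # -> 50.0
```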
-
Presenters: Felix Schuster, Manuel Costa, Olya Ohrimenko, Sylvan Clebsch, Christoph Wintersteiger
Current blockchains feature low performance, allow everyone to see the full history of transactions, and do not provide a means to evolve their governance rules. COCO uses a permissioned consortium model and trusted hardware modules to address these limitations. It achieves high throughput (100x that of currently deployed blockchains) and full confidentiality of data, chaincode, and transaction history.
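To make the consortium and tamper-evidence ideas concrete, here is a toy, hypothetical permissioned ledger, not the COCO protocol: only consortium members may append transactions, and each block is hash-chained so history cannot be silently rewritten. Trusted hardware, consensus, and confidentiality of chaincode and state are omitted.

```python
# Toy permissioned ledger (hypothetical, not the COCO protocol): only consortium
# members may append, and each block is hash-chained to make history tamper-evident.
import hashlib, json, time

class ConsortiumLedger:
    def __init__(self, members):
        self.members = set(members)          # governance: the permissioned set
        self.chain = [{"prev": "0" * 64, "tx": "genesis", "member": None}]

    def _digest(self, block):
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    def append(self, member, tx):
        if member not in self.members:
            raise PermissionError(f"{member} is not a consortium member")
        block = {"prev": self._digest(self.chain[-1]), "tx": tx,
                 "member": member, "ts": time.time()}
        self.chain.append(block)

    def verify(self):
        # Recompute each link; any tampering with an earlier block breaks the chain.
        return all(self.chain[i]["prev"] == self._digest(self.chain[i - 1])
                   for i in range(1, len(self.chain)))

ledger = ConsortiumLedger(["bankA", "bankB"])
ledger.append("bankA", {"transfer": 100, "to": "bankB"})
print(ledger.verify())   # True
```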
-
Presenters: Kapil Vaswani, Manuel Costa, Olya Ohrimenko, Stavros Volos
[Video]
This project proposes a combination of new secure hardware for accelerating machine learning (including custom silicon and GPUs) and cryptographic techniques to limit or eliminate information leakage in multi-party AI scenarios. It is designed to address the privacy and security risks inherent in sharing data sets in the sensitive financial, healthcare, and public sectors.
-
Presenters: John Azariah, Chris Granade, Martin Roetteler
[Video]
When realized, quantum computing will make a giant leap forward from today’s technology—one that will forever alter our economic, industrial, academic, and societal landscapes. This has massive implications for all industries, including healthcare, energy, environmental systems, smart materials, and more. Microsoft is taking a unique, revolutionary approach to quantum computing and offers a way to get started developing quantum solutions with the Q# language and the Quantum Development Kit.
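The Quantum Development Kit expresses quantum programs in Q#; as a language-neutral illustration of the underlying ideas, the toy NumPy statevector simulation below prepares a two-qubit Bell state and shows the superposition and entanglement that quantum algorithms exploit.

```python
# Toy statevector simulation of a two-qubit Bell state in NumPy. The Quantum
# Development Kit expresses the same circuit in Q#; this is only a plain-Python
# illustration of the underlying linear algebra.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)      # Hadamard gate
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.zeros(4)
state[0] = 1.0                                    # start in |00>
state = CNOT @ (np.kron(H, I) @ state)            # H on qubit 0, then CNOT

probs = np.abs(state) ** 2
for basis, p in zip(["00", "01", "10", "11"], probs):
    print(f"P(|{basis}>) = {p:.2f}")              # 0.50 for |00> and |11>
```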
-
Presenters: Dushyanth Narayanan, Aleksandar Dragojevic, Alex Shamis, Junyi Liu
[Video]
FaRM is an in-memory transactional object store for random-access, latency-sensitive workloads such as graphs and key-value stores. It provides high throughput and low, microsecond-scale latency by using remote direct memory access (RDMA) and novel transactional protocols. In addition to FaRM, we will present A1, a graph store built on top of FaRM that is being adopted as the next-generation graph platform targeting several scenarios in Bing, Office, and Cortana.
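As a conceptual illustration of the transactional side, the sketch below implements optimistic, version-checked transactions over a single in-memory object store. FaRM itself is a distributed C++ system that uses one-sided RDMA reads and a more sophisticated commit protocol; only the read-validate-commit idea is shown here.

```python
# Conceptual sketch of optimistic, version-checked transactions over an in-memory
# object store; only the read-validate-commit idea is illustrated.
class ObjectStore:
    def __init__(self):
        self.data = {}      # key -> (version, value)

    def read(self, key):
        return self.data.get(key, (0, None))

class Transaction:
    def __init__(self, store):
        self.store = store
        self.read_set = {}    # key -> version observed
        self.write_set = {}   # key -> new value

    def read(self, key):
        version, value = self.store.read(key)
        self.read_set[key] = version
        return self.write_set.get(key, value)

    def write(self, key, value):
        self.write_set[key] = value

    def commit(self):
        # Validate: abort if any object we read has changed since we read it.
        for key, version in self.read_set.items():
            if self.store.read(key)[0] != version:
                return False
        for key, value in self.write_set.items():
            old_version, _ = self.store.read(key)
            self.store.data[key] = (old_version + 1, value)
        return True

store = ObjectStore()
tx = Transaction(store)
tx.write("alice", {"follows": ["bob"]})   # e.g., a tiny graph adjacency list
print(tx.commit())                        # True (no conflicting writer)
```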
-
Presenters: Ranveer Chandra, Zerina Kapetanovic, Suraj Jog
[Video]
Data-driven techniques help boost agricultural productivity by increasing yields, reducing losses, and cutting input costs. However, these techniques have seen sparse adoption owing to the high cost of manual data collection and limited connectivity solutions. Our solution, called FarmBeats, is an end-to-end IoT platform for agriculture that enables seamless data collection from various sensors, cameras, and drones. Our system design explicitly accounts for weather-related power and Internet outages, and has enabled six-month-long deployments at two US farms.
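One way a design can tolerate such outages is store-and-forward buffering at the farm gateway; the hypothetical sketch below (not the FarmBeats API) buffers sensor readings locally and flushes them only when the uplink is available.

```python
# Hypothetical store-and-forward sketch for outage tolerance: readings are buffered
# at the gateway and flushed to the cloud only when connectivity returns.
# The upload step is a placeholder, not the FarmBeats API.
import collections, random, time

class GatewayBuffer:
    def __init__(self, capacity=10_000):
        self.queue = collections.deque(maxlen=capacity)  # drop oldest if full

    def record(self, sensor_id, value):
        self.queue.append({"sensor": sensor_id, "value": value, "ts": time.time()})

    def flush(self, uplink_ok):
        """Send everything buffered once the uplink is available."""
        sent = 0
        while uplink_ok and self.queue:
            reading = self.queue.popleft()
            # placeholder for the real upload call using `reading`
            sent += 1
        return sent

buffer = GatewayBuffer()
for _ in range(5):
    buffer.record("soil_moisture_01", round(random.uniform(0.1, 0.4), 3))

print(buffer.flush(uplink_ok=False))  # 0 sent: outage, data retained locally
print(buffer.flush(uplink_ok=True))   # 5 sent once connectivity is restored
```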
-
Presenters: Yibo Zhu, Marina Lipshteyn
[Video]
We propose a platform to validate networks with high-fidelity emulations. On a computing cluster such as Azure, we run the full stack (including the real operating system, software, and chip emulators) of each network device in a virtualized sandbox (a virtual machine or a container). We interconnect the sandboxes of the network devices with virtual network links in the same way as in production and load real configurations onto them. The network operator can directly log into emulated devices and carry out tests, or we can extract the routing information and analyze it logically by using a logic solver such as Z3. The key advantage is high fidelity: the platform directly verifies network status and behaviors with the same software and configurations as in production, so it can deliver much more trustworthy validation results to network developers. We have shown an early prototype to Azure network engineers, to enthusiastic response.
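The logical-analysis path can be sketched with Z3's Python bindings (the `z3-solver` package): routing state extracted from the emulated devices is encoded as constraints and a reachability property is checked. The three-router example below is illustrative only.

```python
# Minimal sketch of the logical-analysis path using Z3's Python bindings
# (pip install z3-solver). Route propagation over a tiny 3-router topology is
# encoded as Boolean constraints, and we ask whether the prefix can fail to
# reach R3 under the configured policy.
from z3 import Bools, Solver, Implies, And, Not, sat

r1_has, r2_has, r3_has = Bools("r1_has_prefix r2_has_prefix r3_has_prefix")
r1_to_r2, r2_to_r3 = Bools("r1_advertises_r2 r2_advertises_r3")

s = Solver()
s.add(r1_has)                                   # R1 originates the prefix
s.add(r1_to_r2, r2_to_r3)                       # policy: advertisements enabled
s.add(Implies(And(r1_has, r1_to_r2), r2_has))   # propagation along R1 -> R2
s.add(Implies(And(r2_has, r2_to_r3), r3_has))   # propagation along R2 -> R3

# Is there any model in which R3 does NOT learn the prefix?
s.add(Not(r3_has))
print("reachability violated" if s.check() == sat else "R3 always learns the prefix")
```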
-
Presenters: Ashish Kapoor, Shital Shah
[Video]
Developing and testing real-world AI is an expensive and time-consuming process. We need to collect a large amount of annotated training data in a variety of conditions and environments, and such data-driven systems can exhibit failure cases that jeopardize safety. In this session, we will explore how high-fidelity simulations can help us alleviate some of these problems. We will discuss how near-realistic simulations can not only help with gathering training data but also be embedded in imitation-learning or reinforcement-learning loops to improve sample complexity. Our discussion will center around AirSim, an open-source simulator built on Unreal Engine that offers physically and visually realistic simulations.
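AirSim exposes a Python client for exactly this kind of loop. The sketch below, which assumes the `airsim` package and a running Unreal/AirSim environment, takes off, flies a short leg, and grabs a scene image that could feed a data-gathering or imitation-learning pipeline.

```python
# Short sketch using the AirSim Python client (pip install airsim) against a
# running Unreal/AirSim environment: take off, fly a short leg, and capture a
# scene image for a training or imitation-learning loop.
import airsim

client = airsim.MultirotorClient()        # connects to the simulator on localhost
client.confirmConnection()
client.enableApiControl(True)
client.armDisarm(True)

client.takeoffAsync().join()
client.moveToPositionAsync(10, 0, -5, 3).join()   # x, y, z (NED, metres), velocity

# Request one uncompressed scene image from the front camera.
responses = client.simGetImages([
    airsim.ImageRequest("0", airsim.ImageType.Scene, False, False)
])
print(f"captured {responses[0].width}x{responses[0].height} frame")

client.landAsync().join()
client.armDisarm(False)
client.enableApiControl(False)
```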
-
Presenter: Jilong Xue
[Video]
Open Platform for AI (OpenPAI) is an open-source platform for GPU cluster management and resource scheduling. PAI provides runtime environment support and GPU scheduling, along with debugging, log collection, and port management. PAI is designed to significantly lower the operational overhead of a GPU cluster and improve the productivity of AI researchers. PAI embraces a microservices architecture: every component runs in a container and exposes an explicit service endpoint to other components. The flexible and modular design of PAI makes it an attractive platform for evaluating various research ideas.
-
Presenters: Hitesh Ballani, Paolo Costa, Mark Filer, Jamie Gaudette, Christos Gkantsidis, Thomas Karagiannis, Kai Shi, Benn Thomsen
[Video]
A reliable, performant, and low-cost network is a critical part of the cloud infrastructure. Our research aims to re-invent the next-generation network via emerging optical technologies. This is challenging yet timely because network capacity and speed requirements are expected to increase at the same time as prevalent network technologies become hard to scale. We will demonstrate new low-power and low-cost transceivers developed at Microsoft that are currently being deployed in our data centers, and ongoing research towards new switching systems and network architectures that could, in the future, enable new cloud applications and scenarios.
-
Presenters: Dan Bohus, Ashley Feniello, Sean Andrist, Debadeepta Dey, Mihai Jalobeanu
[Video]
We demonstrate Platform for Situated Intelligence, an open-source, extensible framework that enables the development, fielding, and study of situated, integrative-AI systems. The framework aims to address the challenges of developing multimodal, integrative systems that harness multiple AI technologies, and provides a basis for studying AI techniques for automatic tuning and optimization in such systems.
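The framework itself is .NET-based; as a purely conceptual sketch (not the platform's actual API), the snippet below shows the core abstraction it is built around: streams of timestamped messages from different modalities, fused by originating time so downstream components reason over temporally aligned observations.

```python
# Conceptual sketch only (the platform's actual API is .NET-based): multimodal
# streams carry timestamped messages, and fusion pairs messages whose originating
# times are close enough to be treated as one observation.
from dataclasses import dataclass

@dataclass
class Message:
    originating_time: float   # seconds; when the observation actually occurred
    value: object

def join_nearest(stream_a, stream_b, tolerance=0.05):
    """Pair each message in stream_a with the closest-in-time message in stream_b."""
    fused = []
    for a in stream_a:
        closest = min(stream_b, key=lambda b: abs(b.originating_time - a.originating_time))
        if abs(closest.originating_time - a.originating_time) <= tolerance:
            fused.append((a.originating_time, a.value, closest.value))
    return fused

audio = [Message(0.00, "utterance start"), Message(0.52, "keyword detected")]
video = [Message(0.02, "face present"), Message(0.50, "gaze at robot")]

for t, speech, vision in join_nearest(audio, video):
    print(f"t={t:.2f}s  speech={speech!r}  vision={vision!r}")
```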
-
Presenters: Kim Laine, Sreekanth Kannepalli, Kristin Lauter
[Video]
Microsoft has established itself as a pioneer in trustworthy computing. Features such as Always Encrypted in SQL Server show how we continue to earn customer trust while building world-class products. We imagine a world where user privacy is preserved by using state-of-the-art cryptography while enabling better AI and data analytics products. In this talk, we will discuss how homomorphic encryption, an emerging encryption technology that allows computation directly on encrypted data, is making this possible. We will also show how this technology is being built into existing data analytics/ML/AI frameworks.
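To show what computing directly on encrypted data means, here is a toy Paillier-style additively homomorphic scheme with tiny, insecure parameters. It is purely illustrative: practical homomorphic encryption libraries use lattice-based schemes and vastly larger parameters.

```python
# Toy Paillier-style additively homomorphic encryption with tiny, insecure
# parameters, purely to show arithmetic on ciphertexts. Do not reuse this code.
import math, random

p, q = 61, 53                            # toy primes; real keys are thousands of bits
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)   # lcm(p-1, q-1)
L = lambda x: (x - 1) // n
mu = pow(L(pow(g, lam, n2)), -1, n)

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

c1, c2 = encrypt(20), encrypt(22)
c_sum = (c1 * c2) % n2                   # multiplying ciphertexts adds the plaintexts
print(decrypt(c_sum))                    # -> 42, computed without decrypting the inputs
```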
-
Presenter: Amar Phanishayee
[Video]
The goal of Project Fiddle is to build systems infrastructure to systematically speed up distributed deep neural network (DNN) training while extracting the most from the resources used. Specifically, we are aiming for 100x more efficient training. To achieve this goal, we take a broad view of training: from a single GPU, to multiple GPUs on a machine, all the way to multiple machines in a cluster. Our innovations cut across the systems stack, from the memory subsystem to the structuring of parallel computation to the interconnects between GPUs and machines. Our work has generated interest and led to collaborations with product groups such as Cognitive Toolkit and Cloud Server Infrastructure.
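The baseline pattern whose efficiency this work targets is synchronous data-parallel training: each worker computes a gradient on its own shard and the gradients are averaged (an all-reduce) before the shared model is updated. The NumPy sketch below simulates that pattern on synthetic data; it shows only the basic pattern, not the project's systems.

```python
# Illustrative NumPy simulation of synchronous data-parallel training: each
# "worker" computes a gradient on its shard, the gradients are averaged (the
# all-reduce step), and the replicated model is updated.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -3.0])
X = rng.normal(size=(512, 2))
y = X @ true_w + 0.01 * rng.normal(size=512)     # synthetic linear-regression data

num_workers, lr = 4, 0.1
shards = np.array_split(np.arange(len(X)), num_workers)   # one shard per "GPU"
w = np.zeros(2)                                            # replicated model

for step in range(100):
    # Each worker computes the gradient of the squared loss on its shard.
    grads = []
    for idx in shards:
        err = X[idx] @ w - y[idx]
        grads.append(2 * X[idx].T @ err / len(idx))
    w -= lr * np.mean(grads, axis=0)             # all-reduce: average, then update

print(np.round(w, 3))                            # close to [ 2. -3.]
```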
-
Presenters: Quentin Miller, Michael Bleyer, Andrew Duan
[Video]
We present a prototype of time-of-flight depth-sensing technology, which will be adopted in Project Kinect for Azure as well as in the next generation of HoloLens. This depth sensor outperforms the current state of the art in terms of depth precision, while maintaining both a small form factor and high power efficiency. The depth sensor supports various depth ranges, frame rates, and image resolutions. The user can choose between large and medium field-of-view modes. Our depth technology is used for real-time interaction scenarios such as hand or skeleton tracking and enables high-fidelity spatial mapping. It also empowers researchers and developers to build new scenarios for working with ambient intelligence using Azure AI.
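For background, continuous-wave time-of-flight sensors recover depth from the phase shift of amplitude-modulated light, d = c·Δφ / (4π·f_mod), with the modulation frequency bounding the unambiguous range. The worked example below uses an illustrative 100 MHz modulation frequency; an actual sensor adds multi-frequency disambiguation, per-pixel calibration, and noise handling.

```python
# Worked example of the continuous-wave time-of-flight principle:
#   depth = c * delta_phi / (4 * pi * f_mod)
# The modulation frequency also bounds the unambiguous range.
import math

C = 299_792_458.0            # speed of light, m/s

def depth_from_phase(delta_phi, f_mod):
    return C * delta_phi / (4 * math.pi * f_mod)

def unambiguous_range(f_mod):
    return C / (2 * f_mod)

f_mod = 100e6                                    # 100 MHz modulation (illustrative)
print(f"{depth_from_phase(math.pi / 2, f_mod):.3f} m")            # quarter-cycle shift -> ~0.375 m
print(f"max unambiguous range: {unambiguous_range(f_mod):.2f} m")  # ~1.50 m
```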
-
Presenter: Dan Ports
[Video]
The modern datacenter fabric increasingly includes programmable devices capable of sophisticated processing, ranging from programmable switches that support in-network computation to network-attached FPGAs and smart storage devices. This project asks how we can leverage this new capability to build a new generation of storage and data processing systems that achieve dramatically better performance, reliability, and efficiency. The key idea is to co-design distributed systems with the data center hardware: introducing new low-level primitives and building systems around them. We have already used this approach to achieve order-of-magnitude speedups for replicated services and transaction coordination. Future directions include: a scalable analytics system that handles massive volumes of network telemetry data by moving the first-level filtering and aggregation to programmable network devices; in-network aggregation for faster training of deep neural networks; and replicated remote storage with minimal CPU overhead by using network-attached programmable solid-state drives.
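To make the in-network aggregation direction concrete, the toy sketch below (illustrative names only) simulates a programmable switch that adds gradient chunks from workers into its registers and multicasts back a single aggregate, so each worker receives one result instead of N-1 peer updates.

```python
# Toy sketch of in-network aggregation: a simulated programmable switch sums
# per-packet gradient chunks in its registers and returns one aggregate.
# Names and framing are illustrative only.
import numpy as np

class ProgrammableSwitch:
    """Stands in for a switch dataplane that adds packet payloads into registers."""
    def __init__(self, chunk_size, num_workers):
        self.registers = np.zeros(chunk_size)
        self.expected = num_workers
        self.seen = 0

    def on_packet(self, gradient_chunk):
        self.registers += gradient_chunk      # per-packet add in the dataplane
        self.seen += 1
        if self.seen == self.expected:        # last contributor triggers multicast
            aggregate = self.registers.copy()
            self.registers[:] = 0.0
            self.seen = 0
            return aggregate                  # multicast back to all workers
        return None

workers = [np.full(8, fill_value=i + 1.0) for i in range(4)]   # 4 workers' chunks
switch = ProgrammableSwitch(chunk_size=8, num_workers=4)

result = None
for chunk in workers:
    out = switch.on_packet(chunk)
    if out is not None:
        result = out
print(result)   # each element is 1+2+3+4 = 10: the summed gradient chunk
```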
-
Presenter: David Baumert
[Video]
We demonstrate a robot to assist (not replace) a human receptionist. Our design philosophy is based on Reddy’s 90 percent AI principle for human and robot cooperation: 90 percent of routine tasks are handled by the robot, while the remaining 10 percent of exceptional tasks are handled by a human. The receptionist is the leader and the robot is a follower; the robot works only under the orders of the leader. Using this principle, we have designed and implemented functionalities for this assistant robot, such as a knowledge engine to augment the receptionist’s ability to answer questions, a situation engine to provide awareness of the receptionist’s needs, and a conversation engine for verbal interaction with the receptionist and customers.
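A hypothetical sketch of the 90/10 split: the robot answers the routine questions it can ground in its knowledge engine and defers everything else, or anything the leader overrides, to the human receptionist.

```python
# Hypothetical sketch of the 90/10 cooperation principle: the robot answers
# routine questions it can ground in its knowledge engine and defers everything
# else (and anything the leader overrides) to the human receptionist.
ROUTINE_ANSWERS = {                      # stand-in for the knowledge engine
    "where is building 99": "Building 99 is across the plaza, to your left.",
    "what time does the cafe open": "The cafe opens at 7:30 AM.",
}

def handle_visitor_utterance(utterance, receptionist_override=False):
    key = utterance.lower().strip("?! .")
    if receptionist_override or key not in ROUTINE_ANSWERS:
        return ("escalate-to-human", None)          # the ~10% exceptional cases
    return ("robot-answers", ROUTINE_ANSWERS[key])  # the ~90% routine cases

print(handle_visitor_utterance("Where is Building 99?"))
print(handle_visitor_utterance("I need to change my badge access."))
```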
-
Presenters: Ricardo Bianchini, Anand Bonde
[Video]
This project is exploring ways to create intelligent and efficient cloud platforms by leveraging machine learning (ML) techniques. Our first foray in this direction has produced Resource Central (RC), an ML and prediction-serving system that provides intelligence to the various resource managers in a cloud platform. Specifically, RC collects telemetry from virtual machines and servers, learns from their prior behaviors and, when requested, produces predictions of their future behaviors. RC is currently in production within the Azure Compute Fabric. Our ongoing research effort is exploring other architectures and approaches to integrating ML into cloud systems. In particular, we are currently investigating how to broaden the set of management tasks that RC can support. Our poster will highlight our vision for intelligent and efficient cloud platforms, describe potential approaches for integrating ML, and provide an overview of RC and its initial results.
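As a hedged illustration of the pattern (not RC's actual models or features), the scikit-learn sketch below trains a classifier on synthetic telemetry with hypothetical features to predict whether a VM will be short-lived, the kind of signal a resource manager could consume for placement decisions.

```python
# Illustration of the prediction-serving pattern on synthetic telemetry with
# hypothetical features; not RC's actual models, features, or data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(7)
n = 2000
# Hypothetical features: avg CPU %, deployment size, whether the VM is interactive.
cpu = rng.uniform(0, 100, n)
deployment_size = rng.integers(1, 50, n)
interactive = rng.integers(0, 2, n)
X = np.column_stack([cpu, deployment_size, interactive])
# Synthetic label: interactive VMs in small deployments tend to be short-lived.
short_lived = ((interactive == 1) & (deployment_size < 10)) | (cpu < 5)

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X, short_lived)

new_vm = np.array([[3.0, 4, 1]])                 # low CPU, small deployment, interactive
proba = model.predict_proba(new_vm)[0, 1]
print(f"P(short-lived) = {proba:.2f}")           # a resource manager consumes this signal
```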
-
Presenters: Galen Hunt, Danielle Damasius
The industry largely underestimates the societal need to embody the highest levels of security in every network-connected device—every child’s toy, every household’s appliances, and every company’s equipment. High development and maintenance costs have limited the application of strong security to high-cost or high-margin devices.
We present research about bringing high-value security to low-cost devices. We are especially concerned with the tens of billions of devices powered by microcontrollers, a class of devices ill-prepared for the security challenges of internet connectivity. We will outline the seven properties required in all highly secure devices and describe our experiments to translate these properties for microcontroller-based devices.