Microsoft Research bringing its best to SIGCOMM 2018

Microsoft Research is actively developing technologies as we continually strive to make our network and online services the most performant and efficient on the planet, and that includes openly sharing our progress in advancing the state of the art with the research community. At the upcoming SIGCOMM 2018 – the annual flagship conference of the Association for Computing Machinery’s Special Interest Group on Data Communication, held this year in Budapest, August 20-24 – Microsoft Research will present multiple papers spanning a wide spectrum of our networks, from data center networks, to wide area networks, to the large-scale services that analyze video streams. We are particularly proud of this year’s accomplishments and look forward to sharing our knowledge and experience with you in person.

This post previews several papers that we will be presenting at SIGCOMM.

Low-latency Storage System Enabled by Our Novel RDMA Networks

Let’s begin with data center networks. Storage systems in data centers are an important component of large-scale online services. They typically perform replicated transactional operations for high data availability and integrity. Today, however, such operations suffer from high tail latency even with recent kernel-bypass and storage optimizations, which hurts the predictability of end-to-end performance for these services. We observe that the root cause of the problem is the involvement of the CPU, a precious commodity in multi-tenant settings, in the critical path of replicated transactions.

In our paper, Group-Based NIC-Offloading to Accelerate Replicated Transactions in Multi-Tenant Storage Systems, we present HyperLoop, a new framework that removes the CPU from the critical path of replicated transactions in storage systems by offloading them to commodity RDMA NICs, with non-volatile memory as the storage medium. To achieve this, we develop new and general NIC-offloading primitives that can perform memory operations on all nodes in a replication group while guaranteeing ACID properties, without CPU involvement. We demonstrate that popular storage applications can be easily optimized using our primitives. Our evaluation results show that HyperLoop can reduce 99th percentile latency by up to roughly 800× with close to 0% CPU consumption on replicas.

Figure 1: HyperLoop reduces 99th percentile latency of group-based storage replication by up to 800×.
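
To make the NIC-offloading idea concrete, here is a minimal Python sketch of the kind of group-based primitive described above: a single operation that appends a log record to every replica’s memory region. The names (Replica, g_write, g_flush) and the in-process loop are illustrative stand-ins, not HyperLoop’s actual API; in hardware, the corresponding chained RDMA operations would be executed entirely by the NICs, without touching the replicas’ CPUs.

```python
# Illustrative sketch (not the HyperLoop API): a replicated log append expressed
# as group operations that touch every replica's memory region in one shot,
# mirroring the idea of keeping replica CPUs out of the critical path.

class Replica:
    def __init__(self, name, log_size=1024):
        self.name = name
        self.nvm_log = bytearray(log_size)   # stands in for NIC-accessible NVM
        self.tail = 0                        # next free offset in the log


def g_write(group, record: bytes):
    """Append `record` to every replica's log; in hardware this would be a
    chained RDMA WRITE carried out entirely by the NICs."""
    for r in group:
        end = r.tail + len(record)
        r.nvm_log[r.tail:end] = record
        r.tail = end


def g_flush(group):
    """Durability barrier: flush prior writes to non-volatile memory on every
    replica. A no-op placeholder in this sketch."""
    return


# A transactional append: the data, then a commit marker, each replicated in order.
replicas = [Replica(f"replica-{i}") for i in range(3)]
g_write(replicas, b"PUT key=42 value=hello;")
g_flush(replicas)
g_write(replicas, b"COMMIT;")
print([bytes(r.nvm_log[:r.tail]) for r in replicas])
```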

Effective Traffic Engineering for Optical Links in Wide Area Networks

Outside data centers, Microsoft researchers have also achieved a breakthrough. Fiber optic cables connecting data centers are an expensive resource that large organizations acquire with significant monetary investment. Their importance has driven a conservative deployment approach, with redundancy and reliability baked in at multiple layers.

In our paper RADWAN: Rate Adaptive Wide Area Network, we take a more aggressive approach and argue for adapting the capacity of fiber optic links based on their signal-to-noise ratio (SNR). We investigated this idea by analyzing the SNR of over 2,000 links in an optical backbone over a period of three years. We showed that the capacity of 64% of IP links can be augmented by at least 75 Gbps, leading to an overall capacity gain of over 134 Tbps. Moreover, adapting link capacity to a lower rate can prevent 25% of link failures. This means that, using the same links, we get higher capacity and better availability.
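
To illustrate the core idea, here is a small Python sketch that maps a link’s observed SNR to the highest rate it can currently sustain. The SNR thresholds and rates below are made-up round numbers chosen for illustration, not values measured in the RADWAN study.

```python
# Illustrative sketch of SNR-based rate adaptation. The thresholds and rates
# are placeholder round numbers, not measurements from the RADWAN paper.
# Idea: instead of running every wavelength at a fixed, conservatively
# provisioned rate, pick the highest rate the current SNR supports.

SNR_TO_RATE_GBPS = [          # (minimum SNR in dB, achievable rate in Gbps)
    (6.5, 100),               # e.g., a low-order modulation such as QPSK
    (9.0, 150),               # e.g., 8QAM
    (12.5, 200),              # e.g., 16QAM
]

def adapted_rate(snr_db: float) -> int:
    """Return the highest rate whose SNR threshold the link currently meets,
    or 0 if the link cannot carry traffic at all (treated as a failure)."""
    rate = 0
    for threshold, gbps in SNR_TO_RATE_GBPS:
        if snr_db >= threshold:
            rate = gbps
    return rate

# A link whose SNR degrades no longer has to fail outright: it can step down
# to a lower rate, and step back up when conditions improve.
for snr in (13.1, 10.2, 7.4, 5.0):
    print(f"SNR {snr:5.1f} dB -> {adapted_rate(snr)} Gbps")
```

A traffic engineering controller then has to route traffic around these rate changes while keeping churn low, which is where much of RADWAN’s design effort goes, as described next.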

We proposed RADWAN, a traffic engineering system that allows optical links to adapt their rate based on the observed SNR to achieve higher throughput and availability while minimizing churn during capacity reconfigurations. We evaluated RADWAN using a testbed consisting of 1,540 km of fiber with 16 amplifiers and attenuators. We then simulated the throughput gains of RADWAN at scale compared to the state of the art. Our results show that with realistic traffic matrices and conservative churn control, RADWAN improves overall network throughput by 40%. The service provider we studied has invested in this idea and is rolling out the necessary infrastructure to deploy the first capacity-variable link between Canada and Europe this year.

Scalable Deep Neural Network Adaptation for Video Analytics

Finally, we will share our experience in applying deep convolutional neural networks (NNs) to video data at scale. This poses a substantial systems challenge, as improving inference accuracy often requires a prohibitive cost in computational resources. While it is promising to balance resource use and accuracy by selecting a suitable NN configuration (for example, the resolution and frame rate of the input video), one must also cope with the fact that a configuration’s impact on accuracy changes significantly over time and across video feeds.

In our paper Scalable Adaptation of Video Analytics, we present Chameleon, a controller that dynamically picks the best configurations for existing NN-based video analytics pipelines. The key challenge for Chameleon is that, in theory, adapting configurations frequently can reduce resource consumption with little degradation in accuracy, but periodically searching a large space of configurations incurs an overwhelming resource overhead that negates the gains of adaptation. The insight behind Chameleon is that the underlying characteristics (for example, the velocity and sizes of objects) that affect the best configuration have enough temporal and spatial correlation to allow the search cost to be amortized over time and across multiple video feeds. For example, using the video feeds of five traffic cameras, we demonstrate that compared to a baseline that picks a single optimal configuration offline, Chameleon can achieve 20-50% higher accuracy with the same amount of resources, or achieve the same accuracy with only 30-50% of the resources (a 2-3× speedup).

Figure 2: Chameleon (red) consistently outperforms the baseline across different metrics. The graphs also include 1-σ ellipses to mark the performance variance of each solution.
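
The flavor of this amortized search can be conveyed with a small Python sketch: periodically profile a handful of configurations on a short video segment, then reuse the cheapest configuration that stays close to the best observed accuracy until the next profiling window. The configuration knobs, cost model, and profiler below are illustrative placeholders, not Chameleon’s implementation.

```python
# Illustrative sketch of periodic configuration adaptation for a video
# analytics pipeline; the knobs, costs, and profiler are placeholders.
from itertools import product
import random

RESOLUTIONS = ["480p", "720p", "960p"]
FRAME_RATES = [5, 15, 30]                      # frames per second
CONFIGS = list(product(RESOLUTIONS, FRAME_RATES))

def profile(config, segment):
    """Stand-in profiler: returns (accuracy, cost) of running the pipeline
    with `config` on a short video segment (unused in this toy version)."""
    resolution, fps = config
    cost = {"480p": 1, "720p": 2, "960p": 4}[resolution] * fps
    accuracy = random.uniform(0.5, 1.0)        # placeholder for F1 vs. a golden config
    return accuracy, cost

def pick_config(segment, accuracy_slack=0.05):
    """Profile all configurations once, then return the cheapest one whose
    accuracy is within `accuracy_slack` of the best."""
    profiled = {c: profile(c, segment) for c in CONFIGS}
    best_acc = max(acc for acc, _ in profiled.values())
    near_best = [c for c, (acc, _) in profiled.items() if acc >= best_acc - accuracy_slack]
    return min(near_best, key=lambda c: profiled[c][1])

# Re-profile only at window boundaries; reuse the chosen configuration on the
# segments in between, amortizing the profiling cost over time.
for window in range(3):
    chosen = pick_config(segment=f"segment-{window}")
    print(f"window {window}: using {chosen} until the next profiling window")
```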

This is just a glimpse of a few of the exciting stories we look forward to sharing with you in more detail. See you at SIGCOMM 2018!
