In this project, we have introduced a series of technologies, including DCQCN congestion control and DSCP-based PFC, and addressed a set of challenges, including PFC deadlock, RDMA transport livelock, the PFC pause frame storm, and the slow-receiver symptom, to make RDMA scalable and safe, and to make RDMA deployable in production at large scale. We are currently working on understanding and preventing RDMA deadlock, and on RDMA support for future AI infrastructure.
RDMA Congestion Control
Modern datacenter applications demand high throughput (40Gbps) and ultra-low latency (< 10 µs per hop) from the network, with low CPU overhead. Standard TCP/IP stacks cannot meet these requirements, but Remote Direct Memory Access (RDMA) can. On IP-routed data center networks, RDMA is deployed using the RoCEv2 protocol, which relies on Priority-based Flow Control (PFC) to enable a drop-free network. However, PFC can lead to poor application performance due to problems like head-of-line blocking and unfairness. To alleviate these problems, we introduce DCQCN, an end-to-end congestion control scheme for RoCEv2. To optimize DCQCN performance, we build a fluid model and provide guidelines for tuning switch buffer thresholds and other protocol parameters. Using a 3-tier Clos network testbed, we show that DCQCN dramatically improves the throughput and fairness of RoCEv2 RDMA traffic. DCQCN is implemented in Mellanox NICs, and is being deployed in Microsoft’s datacenters.
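The core of DCQCN's sender-side behavior can be illustrated with a short sketch. The following Python toy model follows the published reaction-point algorithm (ECN marks are echoed back as CNPs, triggering a multiplicative rate cut scaled by a congestion estimate α; rate then recovers toward a target rate). The class name, constants, and line-rate cap are illustrative assumptions, not the NIC implementation.

```python
# Toy sketch of the DCQCN reaction-point (sender NIC) rate updates.
# All constants (g, rate_ai, line rate) are illustrative, not tuned values.

class DcqcnSender:
    def __init__(self, line_rate_gbps=40.0, g=1 / 256, rate_ai=0.04):
        self.line_rate = line_rate_gbps
        self.rc = line_rate_gbps   # current sending rate
        self.rt = line_rate_gbps   # target rate for recovery
        self.alpha = 1.0           # EWMA estimate of congestion
        self.g = g                 # EWMA gain
        self.rate_ai = rate_ai     # additive-increase step (Gbps)

    def on_cnp(self):
        """A CNP arrived (receiver saw an ECN mark): cut the rate."""
        self.rt = self.rc
        self.rc *= (1 - self.alpha / 2)
        self.alpha = (1 - self.g) * self.alpha + self.g

    def on_alpha_timer(self):
        """No CNP during the timer period: decay the congestion estimate."""
        self.alpha = (1 - self.g) * self.alpha

    def on_increase_timer(self):
        """Recovery: raise the target, then move halfway toward it."""
        self.rt = min(self.rt + self.rate_ai, self.line_rate)
        self.rc = (self.rc + self.rt) / 2
```

Starting at line rate with α = 1, a single CNP halves the rate (40 → 20 Gbps), and each increase-timer tick then closes half the remaining gap to the target, which is the binary-search-like recovery that makes DCQCN converge quickly after a congestion episode.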
Experiences Running RoCEv2 at Scale
Over the past one and a half years, we have been using RDMA over commodity Ethernet (RoCEv2) to support some of Microsoft’s highly reliable, latency-sensitive services. This paper describes the challenges we encountered during the process and the solutions we devised to address them. In order to scale RoCEv2 beyond the VLAN domain, we designed a DSCP-based priority flow-control (PFC) mechanism to enable large-scale deployment. We have addressed the safety challenges brought by PFC-induced deadlock (yes, it happened!), RDMA transport livelock, and the NIC PFC pause frame storm problem. We have also built monitoring and management systems to make sure RDMA works as expected. Our experiences show that the safety and scalability issues of running RoCEv2 at scale can all be addressed, and that RDMA can replace TCP for intra-data-center communication, achieving low latency, low CPU overhead, and high throughput.
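The idea behind DSCP-based PFC can be sketched in a few lines. Standard PFC reads the packet's priority from the VLAN tag's PCP bits, which ties deployment to VLANs; classifying instead on the IP header's DSCP field lets PFC work on untagged, IP-routed networks. The mapping below is a hypothetical example for illustration, not Microsoft's production configuration.

```python
# Sketch of DSCP-based packet classification for PFC.
# The DSCP values and priority classes here are illustrative assumptions.

DSCP_TO_PFC_PRIORITY = {
    26: 3,   # e.g. RDMA data traffic -> lossless priority class 3
    48: 6,   # e.g. control traffic -> priority class 6
}

def pfc_priority(dscp: int, default: int = 0) -> int:
    """Map a packet's DSCP value to a PFC priority class.

    A switch using DSCP-based PFC applies this kind of lookup to every
    IP packet, so no VLAN tag (and hence no PCP field) is required.
    """
    return DSCP_TO_PFC_PRIORITY.get(dscp, default)
```

Pause frames are then generated and honored per priority class as usual; only the classification step changes, which is what makes the scheme deployable across an IP-routed fabric.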
Deadlocks in Lossless Networks
Driven by the need for ultra-low latency, high throughput, and low CPU overhead, Remote Direct Memory Access (RDMA) is being deployed by many cloud providers. To deploy RDMA in Ethernet networks, Priority-based Flow Control (PFC) must be used. PFC, however, makes Ethernet networks prone to deadlocks. Prior work on deadlock avoidance has focused on necessary conditions for deadlock formation, which leads to rather onerous and expensive solutions for deadlock avoidance. In this paper, we investigate sufficient conditions for deadlock formation, conjecturing that avoiding sufficient conditions might be less onerous.
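The necessary condition the prior work targets is a cyclic buffer dependency: a PFC deadlock can only form if the "who can pause whom" relation among switch ports contains a cycle. As a hedged illustration (not the paper's algorithm), the toy check below builds that dependency relation as a directed graph and runs a standard DFS cycle detection; the edge representation is an assumption for the sketch.

```python
# Toy check for the necessary condition of PFC deadlock: a cycle in the
# buffer-dependency graph. Edges (u, v) mean "port u's buffer can fill
# because downstream port v paused it". Representation is illustrative.

from collections import defaultdict

def has_cyclic_dependency(edges):
    """Return True if the directed dependency graph contains a cycle."""
    graph = defaultdict(list)
    nodes = set()
    for u, v in edges:
        graph[u].append(v)
        nodes.update((u, v))

    WHITE, GRAY, BLACK = 0, 1, 2   # unvisited / on DFS stack / done
    color = {n: WHITE for n in nodes}

    def dfs(node):
        color[node] = GRAY
        for nxt in graph[node]:
            if color[nxt] == GRAY:                 # back edge: cycle
                return True
            if color[nxt] == WHITE and dfs(nxt):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in nodes)
```

Because the condition is only necessary, an acyclic graph rules deadlock out, but a cycle does not guarantee one forms; that gap is exactly why the abstract argues that reasoning about sufficient conditions could permit cheaper avoidance schemes.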
Personnel
Jitu Padhye
Partner Development Lead