Deep Learning Compiler and Optimizer

Project Overview

This project aims to build deep learning compiler and optimizer infrastructure that provides automatic scalability and efficiency optimizations for both distributed and local execution. The stack covers two classes of optimization: fast distributed training over large-scale servers and efficient local execution on diverse hardware devices. Our current work spans many parts of the system stack, including fast distributed training over RDMA, automatic computation placement across devices, automatic operator batching and kernel fusion, tensor algebra compilation, and sparsity and quantization optimizations.
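To illustrate one of the techniques above, here is a minimal, hypothetical sketch of kernel fusion (it is not this project's actual compiler API): a chain of elementwise operators is merged into a single loop, so intermediate tensors are never materialized in memory.

```python
# Hypothetical illustration of kernel fusion, not this project's API.

def unfused(xs):
    # Three separate "kernels"; each materializes an intermediate list.
    a = [x * 2.0 for x in xs]        # scale
    b = [v + 1.0 for v in a]         # bias
    return [max(v, 0.0) for v in b]  # ReLU

def fused(xs):
    # One fused kernel: a single pass evaluates the whole chain per
    # element, with no intermediate buffers.
    return [max(x * 2.0 + 1.0, 0.0) for x in xs]

print(fused([-1.0, 0.0, 2.0]))  # [0.0, 1.0, 5.0] — same result as unfused
```

On real accelerators the fused form also saves kernel-launch overhead and memory bandwidth, which is why compilers apply this transformation automatically.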


Open-source Release

Several of our projects have been open-sourced; you are welcome to try them, contribute, and collaborate with us.

Job Opportunity

People

Jilong Xue

Principal Researcher / Research Manager

Lingxiao Ma

Senior Researcher

Youshan Miao

Senior Researcher

Wenxiang Hu

Senior RSDE

Wei Cui

Senior Researcher

Fan Yang

Sr. Principal Research Manager

Lidong Zhou

Corporate Vice President, Chief Scientist of Microsoft Asia Pacific R&D Group, Managing Director of Microsoft Research Asia