Deep Learning Inference Service at Microsoft
- Jonathan Soifer
- Jason Li
- Mingqin Li
- Jeffrey Zhu
- Yingnan Li
- Yuxiong He
- Elton Zheng
- Adi Oltean
- Maya Mosyak
- Chris Barnes
- Thomas Liu
- Junhua Wang
2019 USENIX Conference on Operational Machine Learning (OpML ’19)
This paper introduces the Deep Learning Inference Service (DLIS), an online production service at Microsoft for ultra-low-latency deep neural network model inference. We present the system architecture and take a deep dive into core concepts such as intelligent model placement, heterogeneous resource management, resource isolation, and efficient routing. We also present production scale and performance numbers.
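To give a flavor of one of the concepts the abstract names, the sketch below illustrates a generic greedy, resource-aware model placement policy. This is only a minimal illustration of the general idea, not DLIS's actual placement algorithm (which the paper details); the `Server`, `Model`, and `place_model` names, along with the first-fit-by-spare-capacity heuristic, are assumptions made for this example.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Server:
    """Hypothetical server with CPU and memory capacity tracking."""
    name: str
    cpu_cores: int
    memory_gb: float
    used_cores: int = 0
    used_memory_gb: float = 0.0

@dataclass
class Model:
    """Hypothetical model with the resources it needs to meet its latency target."""
    name: str
    cpu_cores: int
    memory_gb: float

def place_model(model: Model, servers: List[Server]) -> Optional[Server]:
    """Greedy placement sketch: among servers with enough spare capacity,
    pick the one with the most free cores so load is spread out."""
    feasible = [
        s for s in servers
        if s.cpu_cores - s.used_cores >= model.cpu_cores
        and s.memory_gb - s.used_memory_gb >= model.memory_gb
    ]
    if not feasible:
        return None  # no feasible host; a real system would scale out or reject
    target = max(feasible, key=lambda s: s.cpu_cores - s.used_cores)
    target.used_cores += model.cpu_cores
    target.used_memory_gb += model.memory_gb
    return target

# Example usage with made-up capacities:
servers = [Server("s1", cpu_cores=16, memory_gb=64), Server("s2", cpu_cores=32, memory_gb=128)]
host = place_model(Model("ranker-v2", cpu_cores=8, memory_gb=24), servers)
print(host.name if host else "no capacity")  # -> "s2"
```

A production placement system such as the one the paper describes would weigh many more signals (latency SLAs, hardware heterogeneity, isolation constraints, traffic patterns) than this single-pass heuristic does.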