Serving DNNs like Clockwork: Performance Predictability from the Bottom Up
- Arpan Gujarati,
- Reza Karimi,
- Safya Alzayat,
- Wei Hao,
- Antoine Kaufmann,
- Ymir Vigfusson,
- Jonathan Mace
2020 Operating Systems Design and Implementation |
Published by USENIX Association
Distinguished Artifact Award
Machine learning inference is becoming a core building block for interactive web applications. As a result, the underlying model serving systems on which these applications depend must consistently meet low latency targets. Existing model serving architectures use well-known reactive techniques to alleviate common-case sources of latency, but cannot effectively curtail tail latency caused by unpredictable execution times. Yet the underlying execution times are not fundamentally unpredictable – on the contrary, we observe that inference using Deep Neural Network (DNN) models has deterministic performance. Here, starting with the predictable execution times of individual DNN inferences, we adopt a principled design methodology to successively build a fully distributed model serving system that achieves predictable end-to-end performance. We evaluate our implementation, Clockwork, using production trace workloads, and show that Clockwork can support thousands of models while simultaneously meeting 100ms latency targets for 99.9999% of requests. We further demonstrate that Clockwork exploits predictable execution times to achieve tight request-level service-level objectives (SLOs) as well as a high degree of request-level performance isolation.
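To illustrate the core idea of building on predictable execution times (not Clockwork's actual implementation), the following minimal Python sketch shows how a scheduler that knows each model's near-deterministic inference latency can predict whether a request will finish within its SLO and reject it up front otherwise. The class and method names are hypothetical, chosen only for this example.

```python
# Illustrative sketch, not Clockwork's code: with deterministic per-model
# inference times, a scheduler can compute a request's expected completion
# time and reject requests that would miss their latency SLO.
import time


class PredictableScheduler:
    def __init__(self, exec_time_ms):
        # exec_time_ms: measured, effectively deterministic inference
        # latency per model, e.g. {"resnet50": 10.0}
        self.exec_time_ms = exec_time_ms
        self.busy_until_ms = 0.0  # when the accelerator next becomes idle

    def submit(self, model, slo_ms):
        now_ms = time.monotonic() * 1000.0
        start_ms = max(now_ms, self.busy_until_ms)
        finish_ms = start_ms + self.exec_time_ms[model]
        if finish_ms > now_ms + slo_ms:
            return False  # predicted to miss its SLO: reject early
        self.busy_until_ms = finish_ms  # reserve the accelerator
        return True


# Example: a hypothetical model with a 10 ms inference time and a 100 ms SLO.
sched = PredictableScheduler({"resnet50": 10.0})
accepted = sched.submit("resnet50", slo_ms=100.0)
```

Because execution times are predictable, such admission decisions can be made deterministically rather than reactively, which is what allows tight SLOs and request-level performance isolation.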