Towards Memory-Efficient Inference in Edge Video Analytics
- Arthi Padmanabhan,
- Anand Iyer,
- Ganesh Ananthanarayanan,
- Yuanchao Shu,
- Nikolaos Karianakis,
- Guoqing Harry Xu,
- Ravi Netravali
Workshop on Hot Topics in Video Analytics and Intelligent Edges (HotEdgeVideo)
Video analytics pipelines incorporate on-premise edge servers to lower analysis latency, preserve privacy, and reduce bandwidth requirements. However, compared to the cloud, edge servers typically have less processing power and GPU memory, which limits the number of video streams they can manage and analyze. Existing approaches to memory management, such as swapping models in and out of GPU memory, sharing a common model stem, or compressing and quantizing models to reduce their size, incur high overheads and often provide limited benefits. In this paper, we propose model merging as an approach to memory management at the edge. The proposal is based on our observation that models deployed at the edge share common layers, and that merging these common layers across models can yield significant memory savings. Our preliminary evaluation indicates that such an approach could reduce memory requirements by up to 75%. We conclude by discussing several challenges involved in realizing the model-merging vision.
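To make the merging idea concrete, the sketch below shows one way two models with an identical early ("stem") layer could share a single copy of that layer's weights in GPU memory. This is only an illustration of the general technique, not the paper's implementation; the model architecture and layer names are hypothetical.

```python
# Minimal sketch of layer merging, assuming two PyTorch models whose
# first (stem) layer is identical. Architecture is hypothetical.
import torch
import torch.nn as nn

def make_model():
    # Hypothetical vision model: a shareable stem followed by a
    # task-specific head.
    return nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1),   # common stem
        nn.ReLU(),
        nn.Conv2d(16, 32, kernel_size=3, padding=1),  # task-specific head
    )

model_a = make_model()
model_b = make_model()

# "Merge" the identical stems: point model_b at model_a's first layer,
# so both models reference one copy of those weights in memory.
model_b[0] = model_a[0]

x = torch.randn(1, 3, 224, 224)
out_a, out_b = model_a(x), model_b(x)

# A single tensor now backs the shared stem weights for both models.
assert model_a[0].weight.data_ptr() == model_b[0].weight.data_ptr()
```

In this toy example only one small layer is deduplicated; the memory savings the paper targets come from merging many common layers across the multiple models co-resident on an edge server.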