GC3: An Optimizing Compiler for GPU Collective Communication
- Meghan Cowan,
- Saeed Maleki,
- Madan Musuvathi,
- Olli Saarikivi,
- Yifan Xiong
arXiv preprint
Machine learning models made up of millions or billions of parameters are often trained and served on large multi-GPU systems. As models grow in size and execute on more GPUs, the collective communication used in these applications becomes a bottleneck. Custom collective algorithms optimized for both particular network topologies and application-specific communication patterns can alleviate this bottleneck and thus help these applications scale.
This paper introduces GC3, a system designed to make GPU communication programmable. GC3 provides a data-oriented, domain-specific language for writing custom collective communication algorithms, together with an optimizing compiler that lowers them to an executable form, which an interpreter-based runtime executes efficiently and flexibly. We used GC3 to write novel collective implementations for AllReduce and AllToAll that are up to 48% and 20% faster than optimized vendor implementations, respectively. We also demonstrate how directly implementing an application-specific collective called AllToNext in GC3 results in a 14.5× speedup over the baseline.
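To give a flavor of the chunk-oriented algorithms such a DSL expresses, below is a minimal plain-Python simulation of a ring AllGather, a classic custom collective of the kind the abstract describes. This is an illustrative sketch of the communication pattern only; the function name `ring_allgather` and its structure are our own and do not reproduce GC3's actual DSL syntax.

```python
# Plain-Python simulation of a ring AllGather (illustrative sketch,
# not the GC3 DSL). Each rank starts with one chunk; over size-1 steps,
# every rank forwards the chunk it received in the previous step to the
# next rank in the ring, until all ranks hold all chunks.

def ring_allgather(chunks):
    """chunks[r] is rank r's input chunk; returns each rank's output buffer."""
    size = len(chunks)
    # Every rank begins with only its own chunk placed in its output buffer.
    output = [[None] * size for _ in range(size)]
    for r in range(size):
        output[r][r] = chunks[r]
    # In step s, rank r sends chunk (r - s) mod size, which it received in
    # step s-1 (or owns initially), to rank (r + 1) mod size.
    for step in range(size - 1):
        for r in range(size):
            src_chunk = (r - step) % size
            dst = (r + 1) % size
            output[dst][src_chunk] = output[r][src_chunk]
    return output

if __name__ == "__main__":
    result = ring_allgather(["a", "b", "c", "d"])
    # After size-1 steps, every rank holds every chunk in index order.
    assert all(rank_out == ["a", "b", "c", "d"] for rank_out in result)
    print(result[0])  # ['a', 'b', 'c', 'd']
```

A GC3-style program would express the same per-chunk sends declaratively and leave scheduling, buffering, and channel assignment to the optimizing compiler, which is what makes topology- and application-specific variants practical to write.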