Scaling Distributed Machine Learning with In-Network Aggregation
- Amedeo Sapio,
- Marco Canini,
- Chen-Yu Ho,
- Jacob Nelson,
- Panos Kalnis,
- Changhoon Kim,
- Arvind Krishnamurthy,
- Masoud Moshref,
- Dan R. K. Ports,
- Peter Richtárik
MSR-TR-2019-9 | Published by KAUST
Training complex machine learning models in parallel is an increasingly important workload. We accelerate distributed parallel training by designing a communication primitive that uses a programmable switch dataplane to execute a key step of the training process. Our approach, SwitchML, reduces the volume of exchanged data by aggregating the model updates from multiple workers in the network. We co-design the switch processing with the end-host protocols and ML frameworks to provide a robust, efficient solution that speeds up training by up to 300%, and by at least 20% for a number of real-world benchmark models.
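
To make the aggregation idea concrete, here is a minimal, illustrative Python sketch (not the SwitchML implementation itself). It assumes hypothetical workers with equally sized gradient vectors and emulates a switch-like aggregator that sums the workers' updates, so each worker sends its update once and receives a single aggregated result instead of every other worker's full update.

```python
# Illustrative sketch only: emulates the idea of in-network aggregation,
# where a switch-like aggregator sums the model updates (gradients) from
# all workers and returns one aggregated vector to every worker.
# NUM_WORKERS and MODEL_SIZE are hypothetical parameters for illustration.
import numpy as np

NUM_WORKERS = 4       # hypothetical number of training workers
MODEL_SIZE = 1_000    # hypothetical number of gradient elements per worker

rng = np.random.default_rng(0)

# Each worker computes its own local model update (gradient).
local_updates = [rng.standard_normal(MODEL_SIZE) for _ in range(NUM_WORKERS)]

def switch_aggregate(updates):
    """Sum the workers' updates element-wise, as the in-network aggregator
    would, so only one aggregated vector is returned to each worker."""
    return np.sum(updates, axis=0)

aggregated = switch_aggregate(local_updates)

# Every worker applies the same result, e.g. an averaged gradient step.
averaged_gradient = aggregated / NUM_WORKERS

# Rough data-volume comparison (gradient elements moved per worker):
# - all-to-all exchange: send own update once, receive N-1 others.
# - in-network aggregation: send own update once, receive one aggregated sum.
all_to_all_volume = MODEL_SIZE * NUM_WORKERS   # send 1, receive N-1
in_network_volume = MODEL_SIZE * 2             # send 1, receive 1
print(f"all-to-all elements per worker:        {all_to_all_volume}")
print(f"in-network aggregation per worker:     {in_network_volume}")
```

In the actual system, the switch dataplane operates on streams of fixed-size packets of integer (fixed-point) values passing through a limited pool of on-switch aggregation slots, rather than on whole floating-point vectors as in this sketch.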