Optimizing Network Performance in Distributed Machine Learning
- Luo Mai,
- Chuntao Hong,
- Paolo Costa
7th USENIX Workshop on Hot Topics in Cloud Computing (HotCloud'15)
Published by USENIX - Advanced Computing Systems Association
To cope with the ever-growing availability of training data, several proposals have emerged to scale machine learning computation beyond a single server and distribute it across a cluster. While this reduces training time, the observed speedup is often limited by network bottlenecks.
To address this, we design MLNet, a host-based communication layer that aims to improve the network performance of distributed machine learning systems. It combines traffic reduction techniques (to diminish network load in the core and at the edges) with traffic management (to reduce the average training time). A key feature of MLNet is its compatibility with existing hardware and software infrastructure, so it can be deployed immediately.
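To give a concrete flavor of host-based traffic reduction, the sketch below shows one common way such a layer can cut network load: summing worker gradient updates along an overlay of intermediate hosts before they reach a parameter server. This is only an illustrative toy under that assumption; the abstract does not describe MLNet's actual mechanisms or API, and the function names and tree layout here are hypothetical.

```python
# Illustrative sketch only: not MLNet's implementation. It demonstrates
# host-side traffic reduction by aggregating worker gradients along an
# overlay tree, so the parameter server (and the core links leading to it)
# receives one combined update instead of one message per worker.

import numpy as np


def aggregate_tree(gradients, fan_in=4):
    """Combine per-worker gradients level by level, `fan_in` at a time.

    Each aggregation step stands in for an intermediate host that sums the
    updates of its children, reducing the number of messages forwarded
    upstream by roughly a factor of `fan_in` per level.
    """
    level = list(gradients)
    while len(level) > 1:
        level = [np.sum(level[i:i + fan_in], axis=0)
                 for i in range(0, len(level), fan_in)]
    return level[0]


if __name__ == "__main__":
    workers = 16
    grads = [np.random.randn(1000) for _ in range(workers)]

    # Without aggregation the server receives `workers` updates;
    # with the tree it receives a single, pre-summed update.
    combined = aggregate_tree(grads, fan_in=4)
    assert np.allclose(combined, np.sum(grads, axis=0))
    print("Server receives 1 update instead of", workers)
```

Because gradient aggregation is a sum, combining updates inside the network overlay preserves the result the server would have computed itself while shrinking the traffic that crosses the most congested links.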
We describe the main techniques underpinning MLNet and show through simulation that the overall training time can be reduced by up to 78%. While preliminary, our results indicate the critical role played by the network and the benefits of introducing a new communication layer to improve the performance of distributed machine learning systems.