Parallel Online Learning
- Daniel Hsu,
- Nikos Karampatziakis,
- John Langford,
- Alex J. Smola
https://arxiv.org/abs/1103.4204
In this work we study the parallelization of online learning, a core primitive in machine learning. In a parallel environment, all known approaches to parallel online learning lead to delayed updates, where the model is updated using out-of-date information. In the worst case, or when examples are temporally correlated, delay can have a very adverse effect on the learning algorithm. Here we analyze, and present preliminary empirical results for, a set of learning architectures based on a feature sharding approach that exhibit various tradeoffs between delay, degree of parallelism, representational power, and empirical performance.
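Below is a minimal, illustrative sketch of the feature sharding idea, not the paper's implementation: each shard owns a disjoint slice of the feature vector and the corresponding weights, partial predictions are summed to form the full prediction, and only a scalar gradient signal is sent back to each shard. The class and function names (`FeatureShard`, `train`), the squared loss, and the sequential simulation of the shards are all assumptions for exposition; the delayed-update behavior the abstract discusses is not modeled here.

```python
# Illustrative sketch of feature sharding for online linear learning.
# Assumptions (not from the paper): squared loss, plain SGD, and shards
# simulated sequentially; in a real system each shard would be a worker
# holding only its slice of the features and weights.
import numpy as np

class FeatureShard:
    """Owns a contiguous slice of the features and its weight sub-vector."""
    def __init__(self, lo, hi, lr=0.01):
        self.lo, self.hi = lo, hi
        self.w = np.zeros(hi - lo)
        self.lr = lr

    def partial_prediction(self, x):
        # Dot product over this shard's slice of the feature vector only.
        return self.w @ x[self.lo:self.hi]

    def update(self, x, grad):
        # grad is the scalar derivative of the loss w.r.t. the prediction;
        # each shard needs only this scalar plus its own feature slice.
        self.w -= self.lr * grad * x[self.lo:self.hi]

def train(X, y, n_shards=4):
    d = X.shape[1]
    bounds = np.linspace(0, d, n_shards + 1, dtype=int)
    shards = [FeatureShard(lo, hi) for lo, hi in zip(bounds, bounds[1:])]
    for x, label in zip(X, y):
        pred = sum(s.partial_prediction(x) for s in shards)  # combine shards
        grad = pred - label                                  # squared loss
        for s in shards:
            s.update(x, grad)
    return shards

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 20))
    w_true = rng.normal(size=20)
    y = X @ w_true
    shards = train(X, y)
    w_learned = np.concatenate([s.w for s in shards])
    print("recovery error:", np.linalg.norm(w_learned - w_true))
```

Because each shard sees only a disjoint feature slice, the shards can run in parallel with communication limited to one partial prediction and one scalar gradient per example; the cost, as the abstract notes, is that asynchronous execution of this pipeline introduces delay between prediction and update.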