Multiple Kernel Learning and the SMO Algorithm
- S. V. N. Vishwanathan
- Z. Sun
- N. Theera-Ampornpunt
- M. Varma
Advances in Neural Information Processing Systems
Our objective is to train $p$-norm Multiple Kernel Learning (MKL) and,
more generally, linear MKL regularised by the Bregman divergence,
using the Sequential Minimal Optimization (SMO) algorithm. The SMO
algorithm is simple, easy to implement and adapt, and efficiently
scales to large problems. As a result, it has gained widespread
acceptance, and SVMs are routinely trained using SMO in diverse
real-world applications. Training using SMO has been a long-standing
goal in MKL for the very same reasons. Unfortunately, the standard MKL
dual is not differentiable, and therefore cannot be optimised using
SMO-style co-ordinate ascent. In this paper, we demonstrate that linear
MKL regularised with the $p$-norm squared, or with certain Bregman
divergences, can indeed be trained using SMO. The resulting algorithm
retains both simplicity and efficiency and is significantly faster
than state-of-the-art specialised $p$-norm MKL solvers. We show that
we can train on a hundred thousand kernels in approximately seven
minutes and on fifty thousand points in less than half an hour on a
single core.
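
To make the setting concrete, the following is a minimal sketch of the
$p$-norm-squared regularised MKL problem in standard notation; the
symbols ($d_k$ for kernel weights, $\phi_k$ for feature maps,
$Y = \mathrm{diag}(y)$, and the constants $C$ and $\lambda$) follow
common MKL conventions and are not taken verbatim from the paper.

$$
\min_{d \ge 0,\, w,\, b} \;\;
\frac{1}{2}\sum_{k} \frac{\lVert w_k \rVert^2}{d_k}
\;+\; C \sum_{i} \max\!\Bigl(0,\; 1 - y_i\bigl(\textstyle\sum_{k} w_k^{\top}\phi_k(x_i) + b\bigr)\Bigr)
\;+\; \frac{\lambda}{2}\Bigl(\sum_{k} d_k^{\,p}\Bigr)^{2/p}
$$

Eliminating $w$ and $d$ (under this particular scaling of the
regulariser) yields a dual in $\alpha$ alone,

$$
\max_{\substack{0 \le \alpha \le C \\ y^{\top}\alpha = 0}} \;\;
\mathbf{1}^{\top}\alpha
\;-\; \frac{1}{8\lambda}\Bigl(\sum_{k}\bigl(\alpha^{\top} Y K_k Y \alpha\bigr)^{q}\Bigr)^{2/q},
\qquad \frac{1}{p} + \frac{1}{q} = 1,
$$

which, for $p > 1$, is differentiable in $\alpha$. This smoothness is
what permits SMO-style two-variable co-ordinate ascent, whereas the
classical $p = 1$ MKL dual is non-smooth.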