Data-dependent generalization performance assessment via quasiconvex optimization

2008 International Workshop on Machine Learning for Signal Processing

Published by IEEE


Compared to classical distribution-independent bounds based on the VC dimension, recent data-dependent bounds based on Rademacher complexity are tighter and, as several investigations suggest, may offer practical utility for model selection. We present an approach to kernel machine learning and generalization performance assessment that integrates concepts from prior work on Rademacher-type data-dependent generalization bounds and on learning by optimizing quasiconvex losses. Our main contribution is the direct estimation of the Rademacher penalty in order to obtain a tighter generalization bound. Specifically, we define the optimization task for the case of learning with the ramp loss and show that direct estimation of the Rademacher penalty can be accomplished by solving a series of quadratic programming problems.
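
For orientation, here is a brief sketch of the quantities the abstract refers to; the notation is ours and may differ from the paper's. The empirical Rademacher penalty of a function class $\mathcal{F}$ on a sample $x_1,\dots,x_n$ is

$$\hat{R}_n(\mathcal{F}) \;=\; \mathbb{E}_{\sigma}\!\left[\,\sup_{f\in\mathcal{F}} \frac{1}{n}\sum_{i=1}^{n} \sigma_i\, f(x_i)\right],$$

where the $\sigma_i$ are independent random signs (the constant in front varies by convention). The ramp loss can be written as a difference of two hinge functions,

$$r(z) \;=\; \min\bigl(1,\max(0,\,1-z)\bigr) \;=\; [1-z]_+ \;-\; [-z]_+,$$

so computing the penalty for ramp-loss learners is a difference-of-convex problem. One standard route, not necessarily the one taken in the paper, is to linearize the concave part iteratively, in which case each step reduces to an SVM-style quadratic program.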