Efficient optimal learning for contextual bandits
- Nikos Karampatziakis
- John Langford
- Miroslav Dudík
- Lev Reyzin
- Tong Zhang
Proceedings of the 27th Conference on Uncertainty in Artificial Intelligence (UAI-11)
We address the problem of learning in an online setting where the learner repeatedly observes features, selects among a set of actions, and receives a reward only for the action taken. We provide the first efficient algorithm with optimal regret. Our algorithm uses a cost-sensitive classification learner as an oracle and has a running time of polylog(N), where N is the number of classification rules among which the oracle might choose. This is exponentially faster than all previous algorithms that achieve optimal regret in this setting. Our formulation also enables us to create an algorithm whose regret is additive, rather than multiplicative, in the feedback delay, unlike all previous work.
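To make the interaction protocol concrete, the following is a minimal, illustrative sketch of a contextual bandit loop built around a cost-sensitive classification oracle. It is not the paper's optimal-regret algorithm: the `CostSensitiveOracle` class, the epsilon-greedy exploration, and the inverse-propensity-scored costs are all simplifying assumptions introduced here purely to show how an oracle-based learner observes features, selects an action, and receives reward for only that action.

```python
import numpy as np


class CostSensitiveOracle:
    """Hypothetical cost-sensitive classification oracle.

    Given contexts and per-action cost vectors, it fits one linear cost
    predictor per action (by least squares) and returns a policy that
    maps a context to the action with the lowest predicted cost.
    """

    def __init__(self, num_actions, dim):
        self.weights = np.zeros((num_actions, dim))

    def fit(self, contexts, costs):
        X = np.asarray(contexts)
        C = np.asarray(costs)
        for a in range(self.weights.shape[0]):
            # Least-squares regression of action a's observed costs on the contexts.
            self.weights[a], *_ = np.linalg.lstsq(X, C[:, a], rcond=None)

    def predict(self, context):
        # Pick the action whose predicted cost is smallest.
        return int(np.argmin(self.weights @ context))


def contextual_bandit_loop(num_rounds=1000, num_actions=3, dim=5, epsilon=0.1, seed=0):
    """Illustrative interaction loop: observe features, choose an action,
    see only that action's reward, and periodically retrain via the oracle.

    Uses simple epsilon-greedy exploration with inverse-propensity-scored
    costs; this is a sketch of the oracle-based setting, not the paper's method.
    """
    rng = np.random.default_rng(seed)
    true_w = rng.normal(size=(num_actions, dim))   # hidden reward model, for simulation only
    oracle = CostSensitiveOracle(num_actions, dim)
    contexts, costs, total_reward = [], [], 0.0

    for t in range(num_rounds):
        x = rng.normal(size=dim)                   # learner observes features
        if t < 10 * num_actions or rng.random() < epsilon:
            a = int(rng.integers(num_actions))     # explore uniformly
            prob = epsilon / num_actions
        else:
            a = oracle.predict(x)                  # exploit the oracle's current policy
            prob = 1.0 - epsilon + epsilon / num_actions
        reward = float(true_w[a] @ x + 0.1 * rng.normal())
        total_reward += reward

        # Inverse-propensity-scored cost vector: only the chosen action
        # receives a (negated, reweighted) reward; other entries stay zero.
        c = np.zeros(num_actions)
        c[a] = -reward / max(prob, 1e-6)
        contexts.append(x)
        costs.append(c)

        if (t + 1) % 100 == 0:                     # periodic retraining through the oracle
            oracle.fit(contexts, costs)

    return total_reward


if __name__ == "__main__":
    print(f"cumulative reward: {contextual_bandit_loop():.2f}")
```

The key point the sketch illustrates is the oracle abstraction: the learner never enumerates the N classification rules directly; it only issues cost-sensitive classification queries, which is what allows the running-time dependence on N to be polylogarithmic.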