Beating the Holdout: Bounds for K-Fold and Progressive Cross-Validation

COLT '99: Proceedings of the Twelfth Annual Conference on Computational Learning Theory

Published by ACM Press


K-fold cross-validation is a popular technique in machine learning for estimating the performance of a hypothesis learned from a data set. We provide the first theoretical justification for this method, showing that it is, on average, more accurate than a held-out test set of comparable size.
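To make the two estimators being compared concrete, the following is a minimal Python sketch of a K-fold cross-validation estimate alongside a hold-out estimate whose test set has comparable size (with k = 5 and a 20% hold-out, both test sets contain n/5 examples). The learner, data, and parameter choices are illustrative assumptions, not constructions from the paper.

    import numpy as np

    def kfold_error_estimate(X, y, train_fn, k=5, seed=None):
        # K-fold CV: each example is held out exactly once; the final
        # estimate averages the test error over the k folds.
        rng = np.random.default_rng(seed)
        folds = np.array_split(rng.permutation(len(y)), k)
        errors = []
        for i in range(k):
            test = folds[i]
            train = np.concatenate([folds[j] for j in range(k) if j != i])
            h = train_fn(X[train], y[train])  # hypothesis learned on k-1 folds
            errors.append(np.mean(h(X[test]) != y[test]))
        return float(np.mean(errors))

    def holdout_error_estimate(X, y, train_fn, test_frac=0.2, seed=None):
        # Hold-out: train once on a split, test on the remainder.
        rng = np.random.default_rng(seed)
        idx = rng.permutation(len(y))
        cut = int(len(y) * (1 - test_frac))
        h = train_fn(X[idx[:cut]], y[idx[:cut]])
        return float(np.mean(h(X[idx[cut:]]) != y[idx[cut:]]))

    def train_majority(X, y):
        # Trivial illustrative learner: always predict the majority
        # training label (stands in for any learning algorithm).
        label = np.bincount(y).argmax()
        return lambda X_new: np.full(len(X_new), label)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 2))
        y = (X[:, 0] + 0.3 * rng.normal(size=200) > 0).astype(int)
        print("5-fold estimate:", kfold_error_estimate(X, y, train_majority, k=5, seed=1))
        print("hold-out estimate:", holdout_error_estimate(X, y, train_majority, seed=1))

Both procedures return an estimate of the same quantity, the expected error of a hypothesis trained by train_fn; the paper's claim concerns the relative accuracy of these two estimates, not the training procedure itself.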