Near-Optimal Bounds for Cross-Validation via Loss Stability
Proceedings of the 30th International Conference on Machine Learning, PMLR 28(1):27-35, 2013.
Abstract
Multi-fold cross-validation is an established practice to estimate the error rate of a learning algorithm. Quantifying the variance reduction gains from cross-validation has been challenging because of the inherent correlations introduced by the folds. In this work we introduce a new and weak measure of stability called loss stability and relate the cross-validation performance to loss stability; we also establish that this relationship is near-optimal. Our work thus quantitatively improves the current best bounds on cross-validation.
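For context, below is a minimal sketch of the k-fold cross-validation error estimate whose concentration the paper analyzes. The learner (a 1-nearest-neighbor classifier), the 0/1 loss, and the toy data are illustrative placeholders, not taken from the paper; the correlations the abstract refers to arise because the k training sets overlap heavily.

import numpy as np

def kfold_cv_error(X, y, train_fn, loss_fn, k=5, seed=0):
    """Average held-out loss over k folds: the CV estimate of the error rate."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    folds = np.array_split(idx, k)
    fold_losses = []
    for i in range(k):
        test = folds[i]
        # Train on the k-1 remaining folds; note these training sets
        # share most of their points across iterations, which is the
        # source of the correlations between per-fold estimates.
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        predict = train_fn(X[train], y[train])
        fold_losses.append(
            np.mean([loss_fn(predict(x), t) for x, t in zip(X[test], y[test])])
        )
    return float(np.mean(fold_losses))

def train_1nn(Xtr, ytr):
    # Placeholder learner: label a point by its nearest training point.
    def predict(x):
        return ytr[np.argmin(np.linalg.norm(Xtr - x, axis=1))]
    return predict

zero_one = lambda pred, true: float(pred != true)

# Toy usage: two Gaussian blobs.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
print(kfold_cv_error(X, y, train_1nn, zero_one, k=5))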