Don’t Waste Your Time: Early Stopping Cross-Validation
Proceedings of the Third International Conference on Automated Machine Learning, PMLR 256:9/1-31, 2024.
Abstract
State-of-the-art automated machine learning systems for tabular data often employ cross-validation to ensure that measured performances generalize to unseen data or that subsequent ensembling does not overfit. However, using k-fold cross-validation instead of holdout validation drastically increases the computational cost of validating a single configuration. While it ensures better generalization and, by extension, better performance, the additional cost is often prohibitive for effective model selection within a time budget. We aim to make model selection with cross-validation more effective by studying early stopping of the cross-validation process during model selection. We investigate the impact of early stopping on random search for two algorithms, MLP and random forest, across 36 classification datasets. We further analyze the impact of the number of folds by considering 3, 5, and 10 folds. In addition, we ablate the impact of early stopping on Bayesian optimization and on repeated cross-validation. Our exploratory study shows that even a simple-to-understand and easy-to-implement method consistently allows model selection to converge faster: in ~94% of all datasets, on average by 214%. Moreover, stopping cross-validation enables model selection to explore the search space more exhaustively, considering +167% configurations on average, while also obtaining better overall performance.
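The stopping rule the abstract alludes to can be as simple as aborting the remaining folds of a configuration once it can no longer beat the incumbent's mean score. Below is a minimal sketch of that idea with scikit-learn; the specific stopping criterion, the `early_stopped_cv` helper, and the hyperparameter ranges are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import StratifiedKFold


def early_stopped_cv(model, X, y, n_folds, incumbent_mean):
    """Evaluate `model` with k-fold CV, stopping early once even a
    perfect score on all remaining folds could not beat the incumbent.

    Returns (mean score over evaluated folds, number of folds evaluated).
    """
    cv = StratifiedKFold(n_splits=n_folds, shuffle=True, random_state=0)
    scores = []
    for i, (train_idx, val_idx) in enumerate(cv.split(X, y), start=1):
        model.fit(X[train_idx], y[train_idx])
        scores.append(accuracy_score(y[val_idx], model.predict(X[val_idx])))
        # Optimistic upper bound: assume accuracy 1.0 on every remaining fold.
        upper_bound = (sum(scores) + (n_folds - i) * 1.0) / n_folds
        if upper_bound < incumbent_mean:
            return float(np.mean(scores)), i  # stop: cannot beat incumbent
    return float(np.mean(scores)), n_folds


# Random search over a toy hyperparameter space for a random forest.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
rng = np.random.default_rng(0)
best_mean, best_config = -np.inf, None
for _ in range(20):
    config = {
        "n_estimators": int(rng.integers(10, 200)),
        "max_depth": int(rng.integers(2, 16)),
    }
    mean_score, folds_used = early_stopped_cv(
        RandomForestClassifier(**config, random_state=0),
        X, y, n_folds=10, incumbent_mean=best_mean,
    )
    # Only fully evaluated configurations may become the new incumbent.
    if folds_used == 10 and mean_score > best_mean:
        best_mean, best_config = mean_score, config
print(best_config, best_mean)
```

The time saved on folds that are skipped is what lets the search evaluate more configurations within the same budget; more aggressive rules (e.g., stopping as soon as the running mean falls below the incumbent's mean) trade evaluation fidelity for even larger savings.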