Don’t Waste Your Time: Early Stopping Cross-Validation

Edward Bergman, Lennart Purucker, Frank Hutter
Proceedings of the Third International Conference on Automated Machine Learning, PMLR 256:9/1-31, 2024.

Abstract

State-of-the-art automated machine learning systems for tabular data often employ cross-validation; ensuring that measured performances generalize to unseen data, or that subsequent ensembling does not overfit. However, using k-fold cross-validation instead of holdout validation drastically increases the computational cost of validating a single configuration. While ensuring better generalization and, by extension, better performance, the additional cost is often prohibitive for effective model selection within a time budget. We aim to make model selection with cross-validation more effective. Therefore, we study early stopping the process of cross-validation during model selection. We investigate the impact of early stopping on random search for two algorithms, MLP and random forest, across 36 classification datasets. We further analyze the impact of the number of folds by considering 3-, 5-, and 10-folds. In addition, we ablate the impact of early stopping on Bayesian optimization and also repeated cross-validation. Our exploratory study shows that even a simple-to-understand and easy-to-implement method consistently allows model selection to converge faster; in ~94% of all datasets, on average by 214%. Moreover, stopping cross-validation enables model selection to explore the search space more exhaustively by considering +167% configurations on average, while also obtaining better overall performance.
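The early-stopping idea described in the abstract can be illustrated with a minimal sketch. The stopping criterion below (an optimistic bound against the incumbent's mean score) is a hypothetical, simple rule chosen for illustration, not necessarily the exact criterion evaluated in the paper; the function name and interface are likewise illustrative.

```python
# Illustrative sketch of early-stopped k-fold cross-validation during model
# selection: evaluate a configuration's folds one at a time and stop as soon
# as even a perfect score on all remaining folds could not beat the best
# mean score seen so far (the incumbent). Scores are assumed to lie in [0, 1],
# e.g. accuracy.

def early_stopped_cv(fold_scores, incumbent_mean):
    """Return (mean_of_evaluated_folds, folds_used, stopped_early).

    fold_scores:    per-fold validation scores for this configuration,
                    consumed in order (stands in for actually fitting a model
                    on each training split and scoring the held-out fold)
    incumbent_mean: mean CV score of the best configuration found so far
    """
    k = len(fold_scores)
    total = 0.0
    for i, score in enumerate(fold_scores, start=1):
        total += score
        # Optimistic bound: pretend every remaining fold scores the maximum 1.0.
        best_possible_mean = (total + (k - i) * 1.0) / k
        if best_possible_mean < incumbent_mean:
            # Even in the best case this configuration loses: stop early and
            # spend the saved budget on evaluating other configurations.
            return total / i, i, True
    return total / k, k, False
```

In a random-search loop, a configuration whose early folds score poorly is discarded after a fraction of the k fits, which is how early stopping frees budget to consider more configurations within the same time limit.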

Cite this Paper


BibTeX
@InProceedings{pmlr-v256-bergman24a,
  title     = {Don’t Waste Your Time: Early Stopping Cross-Validation},
  author    = {Bergman, Edward and Purucker, Lennart and Hutter, Frank},
  booktitle = {Proceedings of the Third International Conference on Automated Machine Learning},
  pages     = {9/1--31},
  year      = {2024},
  editor    = {Eggensperger, Katharina and Garnett, Roman and Vanschoren, Joaquin and Lindauer, Marius and Gardner, Jacob R.},
  volume    = {256},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--12 Sep},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v256/main/assets/bergman24a/bergman24a.pdf},
  url       = {https://proceedings.mlr.press/v256/bergman24a.html},
  abstract  = {State-of-the-art automated machine learning systems for tabular data often employ cross-validation; ensuring that measured performances generalize to unseen data, or that subsequent ensembling does not overfit. However, using k-fold cross-validation instead of holdout validation drastically increases the computational cost of validating a single configuration. While ensuring better generalization and, by extension, better performance, the additional cost is often prohibitive for effective model selection within a time budget. We aim to make model selection with cross-validation more effective. Therefore, we study early stopping the process of cross-validation during model selection. We investigate the impact of early stopping on random search for two algorithms, MLP and random forest, across 36 classification datasets. We further analyze the impact of the number of folds by considering 3-, 5-, and 10-folds. In addition, we ablate the impact of early stopping on Bayesian optimization and also repeated cross-validation. Our exploratory study shows that even a simple-to-understand and easy-to-implement method consistently allows model selection to converge faster; in ${\sim}$94% of all datasets, on average by 214%. Moreover, stopping cross-validation enables model selection to explore the search space more exhaustively by considering +167% configurations on average, while also obtaining better overall performance.}
}
Endnote
%0 Conference Paper
%T Don’t Waste Your Time: Early Stopping Cross-Validation
%A Edward Bergman
%A Lennart Purucker
%A Frank Hutter
%B Proceedings of the Third International Conference on Automated Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Katharina Eggensperger
%E Roman Garnett
%E Joaquin Vanschoren
%E Marius Lindauer
%E Jacob R. Gardner
%F pmlr-v256-bergman24a
%I PMLR
%P 9/1--31
%U https://proceedings.mlr.press/v256/bergman24a.html
%V 256
%X State-of-the-art automated machine learning systems for tabular data often employ cross-validation; ensuring that measured performances generalize to unseen data, or that subsequent ensembling does not overfit. However, using k-fold cross-validation instead of holdout validation drastically increases the computational cost of validating a single configuration. While ensuring better generalization and, by extension, better performance, the additional cost is often prohibitive for effective model selection within a time budget. We aim to make model selection with cross-validation more effective. Therefore, we study early stopping the process of cross-validation during model selection. We investigate the impact of early stopping on random search for two algorithms, MLP and random forest, across 36 classification datasets. We further analyze the impact of the number of folds by considering 3-, 5-, and 10-folds. In addition, we ablate the impact of early stopping on Bayesian optimization and also repeated cross-validation. Our exploratory study shows that even a simple-to-understand and easy-to-implement method consistently allows model selection to converge faster; in ${\sim}$94% of all datasets, on average by 214%. Moreover, stopping cross-validation enables model selection to explore the search space more exhaustively by considering +167% configurations on average, while also obtaining better overall performance.
APA
Bergman, E., Purucker, L., & Hutter, F. (2024). Don’t Waste Your Time: Early Stopping Cross-Validation. Proceedings of the Third International Conference on Automated Machine Learning, in Proceedings of Machine Learning Research, 256:9/1-31. Available from https://proceedings.mlr.press/v256/bergman24a.html.