Confidence Interval Estimation of Predictive Performance in the Context of AutoML

Konstantinos Paraschakis, Andrea Castellani, Giorgos Borboudakis, Ioannis Tsamardinos
Proceedings of the Third International Conference on Automated Machine Learning, PMLR 256:4/1-14, 2024.

Abstract

Any supervised machine learning analysis is required to provide an estimate of the out-of-sample predictive performance. However, it is imperative to also provide a quantification of the uncertainty of this performance in the form of a confidence or credible interval (CI) and not just a point estimate. In an AutoML setting, estimating the CI is challenging due to the “winner’s curse”, i.e., the bias of estimation due to cross-validating several machine learning pipelines and selecting the winning one. In this work, we perform a comparative evaluation of 9 state-of-the-art methods and variants for CI estimation in an AutoML setting on a corpus of real and simulated datasets. The methods are compared in terms of inclusion percentage (does a 95% CI include the true performance at least 95% of the time), CI tightness (tighter CIs are preferable as being more informative), and execution time. The evaluation is the first one that covers most, if not all, such methods and extends previous work to multi-class, imbalanced, and small-sample tasks. In addition, we present a variant, called BBC-F, of an existing method (the Bootstrap Bias Correction, or BBC) that maintains the statistical properties of the BBC but is more computationally efficient. The results support that BBC-F and BBC dominate the other methods in all metrics measured. However, the results also point to open problems and challenges in producing accurate CIs of performance, particularly in the case of multi-class tasks.
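To give a concrete picture of the bootstrap-bias-correction idea the abstract refers to, the snippet below is an illustrative sketch, not the authors' implementation: it assumes accuracy as the performance metric, a pooled N×C matrix of cross-validated out-of-sample predictions (one column per candidate pipeline), and percentile bootstrap intervals; the function name `bbc_ci` and its parameters are hypothetical.

```python
import numpy as np

def bbc_ci(oos_predictions, y, B=1000, alpha=0.05, rng=None):
    """Sketch of a Bootstrap Bias Correction (BBC)-style CI.

    oos_predictions : (N, C) array of pooled out-of-sample (cross-validated)
        predictions, one column per candidate pipeline/configuration.
    y : (N,) array of true labels.
    Returns a bias-corrected point estimate and a (1 - alpha) percentile CI.
    """
    rng = np.random.default_rng(rng)
    n, _ = oos_predictions.shape
    # Per-sample 0/1 correctness; accuracy is assumed as the metric here.
    correct = (oos_predictions == y[:, None])
    perf_samples = []
    for _ in range(B):
        boot = rng.integers(0, n, size=n)            # bootstrap the sample indices
        oob = np.setdiff1d(np.arange(n), boot)       # out-of-bag rows
        if oob.size == 0:
            continue
        # Re-run the selection step: pick the "winning" pipeline on the bootstrap sample.
        winner = correct[boot].mean(axis=0).argmax()
        # Score the winner only on rows it was not selected on.
        perf_samples.append(correct[oob, winner].mean())
    perf_samples = np.asarray(perf_samples)
    lo, hi = np.quantile(perf_samples, [alpha / 2, 1 - alpha / 2])
    return perf_samples.mean(), (lo, hi)
```

Re-selecting the winner on every resample and scoring it only on the out-of-bag rows is what counters the winner's-curse optimism; the BBC-F variant proposed in the paper targets the same estimate while being more computationally efficient.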

Cite this Paper


BibTeX
@InProceedings{pmlr-v256-paraschakis24a,
  title     = {Confidence Interval Estimation of Predictive Performance in the Context of AutoML},
  author    = {Paraschakis, Konstantinos and Castellani, Andrea and Borboudakis, Giorgos and Tsamardinos, Ioannis},
  booktitle = {Proceedings of the Third International Conference on Automated Machine Learning},
  pages     = {4/1--14},
  year      = {2024},
  editor    = {Eggensperger, Katharina and Garnett, Roman and Vanschoren, Joaquin and Lindauer, Marius and Gardner, Jacob R.},
  volume    = {256},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--12 Sep},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v256/main/assets/paraschakis24a/paraschakis24a.pdf},
  url       = {https://proceedings.mlr.press/v256/paraschakis24a.html},
  abstract  = {Any supervised machine learning analysis is required to provide an estimate of the out-of-sample predictive performance. However, it is imperative to also provide a quantification of the uncertainty of this performance in the form of a confidence or credible interval (CI) and not just a point estimate. In an AutoML setting, estimating the CI is challenging due to the “winner’s curse”, i.e., the bias of estimation due to cross-validating several machine learning pipelines and selecting the winning one. In this work, we perform a comparative evaluation of 9 state-of-the-art methods and variants for CI estimation in an AutoML setting on a corpus of real and simulated datasets. The methods are compared in terms of inclusion percentage (does a 95% CI include the true performance at least 95% of the time), CI tightness (tighter CIs are preferable as being more informative), and execution time. The evaluation is the first one that covers most, if not all, such methods and extends previous work to multi-class, imbalanced, and small-sample tasks. In addition, we present a variant, called BBC-F, of an existing method (the Bootstrap Bias Correction, or BBC) that maintains the statistical properties of the BBC but is more computationally efficient. The results support that BBC-F and BBC dominate the other methods in all metrics measured. However, the results also point to open problems and challenges in producing accurate CIs of performance, particularly in the case of multi-class tasks.}
}
Endnote
%0 Conference Paper
%T Confidence Interval Estimation of Predictive Performance in the Context of AutoML
%A Konstantinos Paraschakis
%A Andrea Castellani
%A Giorgos Borboudakis
%A Ioannis Tsamardinos
%B Proceedings of the Third International Conference on Automated Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Katharina Eggensperger
%E Roman Garnett
%E Joaquin Vanschoren
%E Marius Lindauer
%E Jacob R. Gardner
%F pmlr-v256-paraschakis24a
%I PMLR
%P 4/1--14
%U https://proceedings.mlr.press/v256/paraschakis24a.html
%V 256
%X Any supervised machine learning analysis is required to provide an estimate of the out-of-sample predictive performance. However, it is imperative to also provide a quantification of the uncertainty of this performance in the form of a confidence or credible interval (CI) and not just a point estimate. In an AutoML setting, estimating the CI is challenging due to the “winner’s curse”, i.e., the bias of estimation due to cross-validating several machine learning pipelines and selecting the winning one. In this work, we perform a comparative evaluation of 9 state-of-the-art methods and variants for CI estimation in an AutoML setting on a corpus of real and simulated datasets. The methods are compared in terms of inclusion percentage (does a 95% CI include the true performance at least 95% of the time), CI tightness (tighter CIs are preferable as being more informative), and execution time. The evaluation is the first one that covers most, if not all, such methods and extends previous work to multi-class, imbalanced, and small-sample tasks. In addition, we present a variant, called BBC-F, of an existing method (the Bootstrap Bias Correction, or BBC) that maintains the statistical properties of the BBC but is more computationally efficient. The results support that BBC-F and BBC dominate the other methods in all metrics measured. However, the results also point to open problems and challenges in producing accurate CIs of performance, particularly in the case of multi-class tasks.
APA
Paraschakis, K., Castellani, A., Borboudakis, G. & Tsamardinos, I. (2024). Confidence Interval Estimation of Predictive Performance in the Context of AutoML. Proceedings of the Third International Conference on Automated Machine Learning, in Proceedings of Machine Learning Research 256:4/1-14. Available from https://proceedings.mlr.press/v256/paraschakis24a.html.