On double-descent in uncertainty quantification in overparametrized models

Lucas Clarte, Bruno Loureiro, Florent Krzakala, Lenka Zdeborova
Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, PMLR 206:7089-7125, 2023.

Abstract

Uncertainty quantification is a central challenge in reliable and trustworthy machine learning. Naive measures such as last-layer scores are well-known to yield overconfident estimates in the context of overparametrized neural networks. Several methods, ranging from temperature scaling to different Bayesian treatments of neural networks, have been proposed to mitigate overconfidence, most often supported by the numerical observation that they yield better calibrated uncertainty measures. In this work, we provide a sharp comparison between popular uncertainty measures for binary classification in a mathematically tractable model for overparametrized neural networks: the random features model. We discuss a trade-off between classification accuracy and calibration, unveiling a double descent behavior in the calibration curve of optimally regularised estimators as a function of overparametrization. This is in contrast with the empirical Bayes method, which we show to be well calibrated in our setting despite the higher generalization error and overparametrization.
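As a rough illustration of the setting the abstract describes, the sketch below fits a logistic last layer on ReLU random features in the overparametrized regime (more random features than samples) and estimates a simple expected calibration error of the last-layer probability scores. It is an assumption-laden toy example, not the paper's analysis or code: the teacher model, ReLU feature map, regularization strength, and binning scheme are all illustrative choices.

# Minimal sketch (assumptions noted above): random features classifier on
# synthetic binary data, with a naive expected-calibration-error estimate
# for the last-layer scores in the overparametrized regime (p > n).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d, p = 500, 100, 1500  # samples, input dimension, random features (p > n)

# Teacher: binary labels drawn from a logistic model on the raw inputs
w_star = rng.normal(size=d) / np.sqrt(d)
X = rng.normal(size=(n, d))
y = (rng.random(n) < 1 / (1 + np.exp(-X @ w_star))).astype(int)

# Random features model: fixed random first layer, trained logistic last layer
F = rng.normal(size=(d, p)) / np.sqrt(d)
Z = np.maximum(X @ F, 0.0)  # ReLU random features
clf = LogisticRegression(C=1.0, max_iter=5000).fit(Z, y)

# Expected calibration error of the last-layer scores on fresh test data
X_test = rng.normal(size=(10_000, d))
y_test = (rng.random(10_000) < 1 / (1 + np.exp(-X_test @ w_star))).astype(int)
conf = clf.predict_proba(np.maximum(X_test @ F, 0.0))[:, 1]

bins = np.linspace(0.0, 1.0, 11)
idx = np.digitize(conf, bins) - 1
ece = sum(
    abs(conf[idx == b].mean() - y_test[idx == b].mean()) * (idx == b).mean()
    for b in range(10) if np.any(idx == b)
)
print(f"test ECE of last-layer scores: {ece:.3f}")

Varying p at fixed n (and sweeping the regularization C) in this toy setup is one way to visualize how calibration of plain last-layer scores behaves with overparametrization, the kind of question the paper studies exactly for the random features model.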

Cite this Paper


BibTeX
@InProceedings{pmlr-v206-clarte23a,
  title     = {On double-descent in uncertainty quantification in overparametrized models},
  author    = {Clarte, Lucas and Loureiro, Bruno and Krzakala, Florent and Zdeborova, Lenka},
  booktitle = {Proceedings of The 26th International Conference on Artificial Intelligence and Statistics},
  pages     = {7089--7125},
  year      = {2023},
  editor    = {Ruiz, Francisco and Dy, Jennifer and van de Meent, Jan-Willem},
  volume    = {206},
  series    = {Proceedings of Machine Learning Research},
  month     = {25--27 Apr},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v206/clarte23a/clarte23a.pdf},
  url       = {https://proceedings.mlr.press/v206/clarte23a.html}
}
Endnote
%0 Conference Paper
%T On double-descent in uncertainty quantification in overparametrized models
%A Lucas Clarte
%A Bruno Loureiro
%A Florent Krzakala
%A Lenka Zdeborova
%B Proceedings of The 26th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2023
%E Francisco Ruiz
%E Jennifer Dy
%E Jan-Willem van de Meent
%F pmlr-v206-clarte23a
%I PMLR
%P 7089--7125
%U https://proceedings.mlr.press/v206/clarte23a.html
%V 206
APA
Clarte, L., Loureiro, B., Krzakala, F., & Zdeborova, L. (2023). On double-descent in uncertainty quantification in overparametrized models. Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 206:7089-7125. Available from https://proceedings.mlr.press/v206/clarte23a.html.