Legitimate ground-truth-free metrics for deep uncertainty classification scoring

Arthur Pignet, Chiara Regniez, John Klein
Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, PMLR 258:2197-2205, 2025.

Abstract

Despite the increasing demand for safer machine learning practices, the use of Uncertainty Quantification (UQ) methods in production remains limited. This limitation is exacerbated by the challenge of validating UQ methods in the absence of UQ ground truth. In classification tasks, when only a usual set of test data is at hand, several authors have suggested different metrics that can be computed from such test points and that assess the quality of quantified uncertainties. This paper investigates such metrics and proves that they are theoretically well-behaved and actually tied to an uncertainty ground truth that is easily interpretable in terms of model prediction trustworthiness ranking. Equipped with these new results, and given the applicability of these metrics in the usual supervised paradigm, we argue that our contributions will help promote a broader use of UQ in deep learning.

Cite this Paper


BibTeX
@InProceedings{pmlr-v258-pignet25a,
  title     = {Legitimate ground-truth-free metrics for deep uncertainty classification scoring},
  author    = {Pignet, Arthur and Regniez, Chiara and Klein, John},
  booktitle = {Proceedings of The 28th International Conference on Artificial Intelligence and Statistics},
  pages     = {2197--2205},
  year      = {2025},
  editor    = {Li, Yingzhen and Mandt, Stephan and Agrawal, Shipra and Khan, Emtiyaz},
  volume    = {258},
  series    = {Proceedings of Machine Learning Research},
  month     = {03--05 May},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v258/main/assets/pignet25a/pignet25a.pdf},
  url       = {https://proceedings.mlr.press/v258/pignet25a.html},
  abstract  = {Despite the increasing demand for safer machine learning practices, the use of Uncertainty Quantification (UQ) methods in production remains limited. This limitation is exacerbated by the challenge of validating UQ methods in absence of UQ ground truth. In classification tasks, when only a usual set of test data is at hand, several authors suggested different metrics that can be computed from such test points while assessing the quality of quantified uncertainties. This paper investigates such metrics and proves that they are theoretically well-behaved and actually tied to some uncertainty ground truth which is easily interpretable in terms of model prediction trustworthiness ranking. Equipped with those new results, and given the applicability of those metrics in the usual supervised paradigm, we argue that our contributions will help promoting a broader use of UQ in deep learning.}
}
Endnote
%0 Conference Paper
%T Legitimate ground-truth-free metrics for deep uncertainty classification scoring
%A Arthur Pignet
%A Chiara Regniez
%A John Klein
%B Proceedings of The 28th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2025
%E Yingzhen Li
%E Stephan Mandt
%E Shipra Agrawal
%E Emtiyaz Khan
%F pmlr-v258-pignet25a
%I PMLR
%P 2197--2205
%U https://proceedings.mlr.press/v258/pignet25a.html
%V 258
%X Despite the increasing demand for safer machine learning practices, the use of Uncertainty Quantification (UQ) methods in production remains limited. This limitation is exacerbated by the challenge of validating UQ methods in absence of UQ ground truth. In classification tasks, when only a usual set of test data is at hand, several authors suggested different metrics that can be computed from such test points while assessing the quality of quantified uncertainties. This paper investigates such metrics and proves that they are theoretically well-behaved and actually tied to some uncertainty ground truth which is easily interpretable in terms of model prediction trustworthiness ranking. Equipped with those new results, and given the applicability of those metrics in the usual supervised paradigm, we argue that our contributions will help promoting a broader use of UQ in deep learning.
APA
Pignet, A., Regniez, C. & Klein, J. (2025). Legitimate ground-truth-free metrics for deep uncertainty classification scoring. Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 258:2197-2205. Available from https://proceedings.mlr.press/v258/pignet25a.html.