Robustness quantification: a new method for assessing the reliability of the predictions of a classifier

Adrián Detavernier, Jasper De Bock
Proceedings of the Fourteenth International Symposium on Imprecise Probabilities: Theories and Applications, PMLR 290:126-136, 2025.

Abstract

Based on existing ideas in the field of imprecise probabilities, we present a new approach for assessing the reliability of the individual predictions of a generative probabilistic classifier. We call this approach robustness quantification, compare it to uncertainty quantification, and demonstrate that it continues to work well even for classifiers that are learned from small training sets that are sampled from a shifted distribution.
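The paper itself is not reproduced on this page, so the sketch below is only a rough, generic illustration of the underlying idea of asking how much an estimated model can be perturbed before an individual prediction flips; it is not the authors' robustness quantification method. It assumes a scikit-learn GaussianNB classifier and an epsilon-contamination (linear-vacuous) neighbourhood around the estimated class probabilities, and the function name robustness_score and all modelling choices are hypothetical.

    # Illustrative sketch only, NOT the method from the paper.
    # Idea: a prediction is "robust" up to the largest epsilon for which the
    # predicted class still wins when every class probability p is replaced by
    # the contaminated interval [(1-eps)*p, (1-eps)*p + eps].
    import numpy as np
    from sklearn.naive_bayes import GaussianNB

    def robustness_score(class_probs: np.ndarray, predicted: int) -> float:
        """Largest eps such that the lower contaminated probability of the
        predicted class still exceeds the upper contaminated probability of
        every other class. Derivation: (1-eps)*p* >= (1-eps)*p_c + eps
        rearranges to eps <= (p* - p_c) / (1 + p* - p_c)."""
        p_star = class_probs[predicted]
        others = np.delete(class_probs, predicted)
        margins = p_star - others
        return float(np.min(margins / (1.0 + margins)))

    # Toy usage on synthetic two-class data (hypothetical example).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2)) + np.repeat([[0.0, 0.0], [2.0, 2.0]], 100, axis=0)
    y = np.repeat([0, 1], 100)
    model = GaussianNB().fit(X, y)

    x_new = np.array([[1.0, 1.0]])            # deliberately ambiguous point
    probs = model.predict_proba(x_new)[0]     # estimated p(c | x) per class
    pred = int(np.argmax(probs))
    print(pred, robustness_score(probs, pred))  # score near 0 -> fragile prediction

A score close to 0 means a tiny perturbation of the estimated probabilities already changes the prediction, while a score close to 1 means the prediction survives even severe perturbations; the paper develops this kind of assessment within the framework of imprecise probabilities.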

Cite this Paper

BibTeX
@InProceedings{pmlr-v290-detavernier25a,
  title     = {Robustness quantification: a new method for assessing the reliability of the predictions of a classifier},
  author    = {Detavernier, Adri\'an and De Bock, Jasper},
  booktitle = {Proceedings of the Fourteenth International Symposium on Imprecise Probabilities: Theories and Applications},
  pages     = {126--136},
  year      = {2025},
  editor    = {Destercke, Sébastien and Erreygers, Alexander and Nendel, Max and Riedel, Frank and Troffaes, Matthias C. M.},
  volume    = {290},
  series    = {Proceedings of Machine Learning Research},
  month     = {15--18 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v290/main/assets/detavernier25a/detavernier25a.pdf},
  url       = {https://proceedings.mlr.press/v290/detavernier25a.html},
  abstract  = {Based on existing ideas in the field of imprecise probabilities, we present a new approach for assessing the reliability of the individual predictions of a generative probabilistic classifier. We call this approach robustness quantification, compare it to uncertainty quantification, and demonstrate that it continues to work well even for classifiers that are learned from small training sets that are sampled from a shifted distribution.}
}
Endnote
%0 Conference Paper
%T Robustness quantification: a new method for assessing the reliability of the predictions of a classifier
%A Adrián Detavernier
%A Jasper De Bock
%B Proceedings of the Fourteenth International Symposium on Imprecise Probabilities: Theories and Applications
%C Proceedings of Machine Learning Research
%D 2025
%E Sébastien Destercke
%E Alexander Erreygers
%E Max Nendel
%E Frank Riedel
%E Matthias C. M. Troffaes
%F pmlr-v290-detavernier25a
%I PMLR
%P 126--136
%U https://proceedings.mlr.press/v290/detavernier25a.html
%V 290
%X Based on existing ideas in the field of imprecise probabilities, we present a new approach for assessing the reliability of the individual predictions of a generative probabilistic classifier. We call this approach robustness quantification, compare it to uncertainty quantification, and demonstrate that it continues to work well even for classifiers that are learned from small training sets that are sampled from a shifted distribution.
APA
Detavernier, A. & De Bock, J. (2025). Robustness quantification: a new method for assessing the reliability of the predictions of a classifier. Proceedings of the Fourteenth International Symposium on Imprecise Probabilities: Theories and Applications, in Proceedings of Machine Learning Research 290:126-136. Available from https://proceedings.mlr.press/v290/detavernier25a.html.
