Confidence Histograms for Model Reliability Analysis and Temperature Calibration

Farina Kock, Felix Thielke, Grzegorz Chlebus, Hans Meine
Proceedings of The 5th International Conference on Medical Imaging with Deep Learning, PMLR 172:741-759, 2022.

Abstract

Proper estimation of uncertainty may help the adoption of deep learning-based solutions in clinical practice, when measurements can take error bounds into account and out-of-distribution situations can be reliably detected. Therefore, a variety of approaches have been proposed already, with varying requirements and computational effort. Uncertainty estimation is complicated by the fact that typical neural networks are overly confident; this effect is particularly prominent with the Dice loss, which is commonly used for image segmentation. Therefore, various methods for model calibration have been proposed to reduce the discrepancy between classifier confidence and the observed accuracy. In this work, we focus on the simple calibration method of introducing a temperature parameter for the softmax operation. This approach is not only appealing because of its mathematical simplicity, it also appears to be well-suited for countering the main distortion of the classifier output confidence levels. Finally, it comes at literally zero extra cost, because the necessary multiplications can be integrated into the previous layer’s weights after calibration, and a scalar temperature does not affect the classification at all. Our contributions are as follows: We thoroughly evaluate the confidence behavior of several models with different architectures, different numbers of output classes, different loss functions, and different segmentation tasks. In order to do so, we propose an efficient intermediate representation and some adaptations of reliability diagrams to semantic segmentation. We investigate different calibration measures and their optimal temperatures for these diverse models.
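The temperature-scaling idea the abstract describes — dividing logits by a scalar T before the softmax, then folding 1/T into the last layer's weights so inference costs nothing extra — can be sketched as follows. This is an illustrative NumPy sketch of the general technique, not the authors' implementation; the variable names and the toy linear layer are assumptions.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def calibrated_probs(logits, T):
    # Temperature scaling: T > 1 softens overconfident outputs; T = 1 is a no-op.
    return softmax(logits / T)

# Toy final linear layer (weights W, bias b) producing class logits.
rng = np.random.default_rng(0)
W, b = rng.normal(size=(16, 3)), rng.normal(size=3)
x = rng.normal(size=16)
T = 2.5

logits = x @ W + b
p_scaled = calibrated_probs(logits, T)

# "Zero extra cost": fold 1/T into the previous layer's weights and bias
# after calibration, so deployment needs no additional multiplication.
W_folded, b_folded = W / T, b / T
p_folded = softmax(x @ W_folded + b_folded)

assert np.allclose(p_scaled, p_folded)
# A scalar temperature rescales all logits equally, so the argmax
# (the classification) is unchanged:
assert np.argmax(p_scaled) == np.argmax(softmax(logits))
```

The two assertions check the paper's two claims about this calibration method: the scaled softmax equals the folded-weights softmax, and the predicted class is unaffected by T.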

Cite this Paper


BibTeX
@InProceedings{pmlr-v172-kock22a,
  title     = {Confidence Histograms for Model Reliability Analysis and Temperature Calibration},
  author    = {Kock, Farina and Thielke, Felix and Chlebus, Grzegorz and Meine, Hans},
  booktitle = {Proceedings of The 5th International Conference on Medical Imaging with Deep Learning},
  pages     = {741--759},
  year      = {2022},
  editor    = {Konukoglu, Ender and Menze, Bjoern and Venkataraman, Archana and Baumgartner, Christian and Dou, Qi and Albarqouni, Shadi},
  volume    = {172},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--08 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v172/kock22a/kock22a.pdf},
  url       = {https://proceedings.mlr.press/v172/kock22a.html},
  abstract  = {Proper estimation of uncertainty may help the adoption of deep learning-based solutions in clinical practice, when measurements can take error bounds into account and out-of-distribution situations can be reliably detected. Therefore, a variety of approaches have been proposed already, with varying requirements and computational effort. Uncertainty estimation is complicated by the fact that typical neural networks are overly confident; this effect is particularly prominent with the Dice loss, which is commonly used for image segmentation. Therefore, various methods for model calibration have been proposed to reduce the discrepancy between classifier confidence and the observed accuracy. In this work, we focus on the simple calibration method of introducing a temperature parameter for the softmax operation. This approach is not only appealing because of its mathematical simplicity, it also appears to be well-suited for countering the main distortion of the classifier output confidence levels. Finally, it comes at literally zero extra cost, because the necessary multiplications can be integrated into the previous layer’s weights after calibration, and a scalar temperature does not affect the classification at all. Our contributions are as follows: We thoroughly evaluate the confidence behavior of several models with different architectures, different numbers of output classes, different loss functions, and different segmentation tasks. In order to do so, we propose an efficient intermediate representation and some adaptations of reliability diagrams to semantic segmentation. We investigate different calibration measures and their optimal temperatures for these diverse models.}
}
Endnote
%0 Conference Paper
%T Confidence Histograms for Model Reliability Analysis and Temperature Calibration
%A Farina Kock
%A Felix Thielke
%A Grzegorz Chlebus
%A Hans Meine
%B Proceedings of The 5th International Conference on Medical Imaging with Deep Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Ender Konukoglu
%E Bjoern Menze
%E Archana Venkataraman
%E Christian Baumgartner
%E Qi Dou
%E Shadi Albarqouni
%F pmlr-v172-kock22a
%I PMLR
%P 741--759
%U https://proceedings.mlr.press/v172/kock22a.html
%V 172
%X Proper estimation of uncertainty may help the adoption of deep learning-based solutions in clinical practice, when measurements can take error bounds into account and out-of-distribution situations can be reliably detected. Therefore, a variety of approaches have been proposed already, with varying requirements and computational effort. Uncertainty estimation is complicated by the fact that typical neural networks are overly confident; this effect is particularly prominent with the Dice loss, which is commonly used for image segmentation. Therefore, various methods for model calibration have been proposed to reduce the discrepancy between classifier confidence and the observed accuracy. In this work, we focus on the simple calibration method of introducing a temperature parameter for the softmax operation. This approach is not only appealing because of its mathematical simplicity, it also appears to be well-suited for countering the main distortion of the classifier output confidence levels. Finally, it comes at literally zero extra cost, because the necessary multiplications can be integrated into the previous layer’s weights after calibration, and a scalar temperature does not affect the classification at all. Our contributions are as follows: We thoroughly evaluate the confidence behavior of several models with different architectures, different numbers of output classes, different loss functions, and different segmentation tasks. In order to do so, we propose an efficient intermediate representation and some adaptations of reliability diagrams to semantic segmentation. We investigate different calibration measures and their optimal temperatures for these diverse models.
APA
Kock, F., Thielke, F., Chlebus, G. &amp; Meine, H. (2022). Confidence Histograms for Model Reliability Analysis and Temperature Calibration. Proceedings of The 5th International Conference on Medical Imaging with Deep Learning, in Proceedings of Machine Learning Research 172:741-759. Available from https://proceedings.mlr.press/v172/kock22a.html.