Confidence Histograms for Model Reliability Analysis and Temperature Calibration
Proceedings of The 5th International Conference on Medical Imaging with Deep Learning, PMLR 172:741-759, 2022.
Abstract
Proper estimation of uncertainty may help the adoption of deep learning-based solutions in clinical practice, where measurements can take error bounds into account and out-of-distribution situations can be reliably detected. A variety of approaches with varying requirements and computational effort has therefore already been proposed. Uncertainty estimation is complicated by the fact that typical neural networks are overly confident; this effect is particularly prominent with the Dice loss, which is commonly used for image segmentation. Consequently, various methods for model calibration have been proposed to reduce the discrepancy between classifier confidence and observed accuracy. In this work, we focus on a simple calibration method: introducing a temperature parameter into the softmax operation. This approach is appealing not only because of its mathematical simplicity, but also because it appears well-suited for countering the main distortion of the classifier's output confidence levels. Finally, it comes at zero extra inference cost, because the necessary multiplications can be integrated into the previous layer's weights after calibration, and a scalar temperature does not affect the classification at all. Our contributions are as follows: we thoroughly evaluate the confidence behavior of several models with different architectures, numbers of output classes, loss functions, and segmentation tasks; to do so, we propose an efficient intermediate representation and adaptations of reliability diagrams to semantic segmentation; and we investigate different calibration measures and their optimal temperatures across these diverse models.
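To make the technique concrete, the following is a minimal sketch of temperature scaling, not the authors' implementation: it assumes flattened per-pixel logits of shape (N, C) and integer labels of shape (N,), and the helper names (softmax, nll, fit_temperature) are our own illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def softmax(z, T=1.0, axis=-1):
    """Temperature-scaled softmax. A scalar T > 0 rescales the
    confidences but leaves the argmax (predicted class) unchanged."""
    z = z / T
    z = z - z.max(axis=axis, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def nll(T, logits, labels):
    """Negative log-likelihood of the true labels under softmax(logits / T)."""
    p = softmax(logits, T)
    return -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))

def fit_temperature(logits, labels):
    """Fit a single scalar T on held-out data by minimizing the NLL.
    The search bounds are an assumption for illustration."""
    res = minimize_scalar(nll, bounds=(0.05, 20.0), args=(logits, labels),
                          method="bounded")
    return res.x

# After calibration, T can be folded into the final linear/conv layer,
# (W, b) -> (W / T, b / T), so inference incurs no extra operations.
```

Because dividing all logits by the same scalar preserves their ordering, this calibration changes only the reported confidence values; folding T into the last layer's weights, as noted in the final comment, is what makes the method cost-free at inference time.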