Local calibration: metrics and recalibration

Rachel Luo, Aadyot Bhatnagar, Yu Bai, Shengjia Zhao, Huan Wang, Caiming Xiong, Silvio Savarese, Stefano Ermon, Edward Schmerling, Marco Pavone
Proceedings of the Thirty-Eighth Conference on Uncertainty in Artificial Intelligence, PMLR 180:1286-1295, 2022.

Abstract

Probabilistic classifiers output confidence scores along with their predictions, and these confidence scores should be calibrated, i.e., they should reflect the reliability of the prediction. Confidence scores that minimize standard metrics such as the expected calibration error (ECE) accurately measure the reliability on average across the entire population. However, it is in general impossible to measure the reliability of an individual prediction. In this work, we propose the local calibration error (LCE) to span the gap between average and individual reliability. For each individual prediction, the LCE measures the average reliability of a set of similar predictions, where similarity is quantified by a kernel function on a pretrained feature space and by a binning scheme over predicted model confidences. We show theoretically that the LCE can be estimated sample-efficiently from data, and empirically find that it reveals miscalibration modes that are more fine-grained than the ECE can detect. Our key result is a novel local recalibration method, LoRe, which improves confidence scores for individual predictions and decreases the LCE. Experimentally, we show that our recalibration method produces more accurate confidence scores, which improves downstream fairness and decision making on classification tasks with both image and tabular data.
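To make the idea concrete, the sketch below is a minimal, illustrative estimator of a kernel-weighted, binned calibration error around a single query point; it is not the paper's exact estimator, and all names (local_calibration_error, bandwidth, the RBF kernel choice) are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

def local_calibration_error(z_query, z, conf, correct, num_bins=10, bandwidth=1.0):
    """Illustrative kernel-weighted calibration error around one query point.

    z_query : (d,)   feature vector of the individual prediction (pretrained feature space)
    z       : (n, d) feature vectors of a held-out evaluation set
    conf    : (n,)   predicted confidences in [0, 1]
    correct : (n,)   1 if the prediction was correct, 0 otherwise
    """
    # RBF kernel weights: points near the query in feature space count more.
    sq_dists = np.sum((z - z_query) ** 2, axis=1)
    w = np.exp(-sq_dists / (2.0 * bandwidth ** 2))

    # Bin examples by predicted confidence, as in a standard ECE estimator.
    bins = np.minimum((conf * num_bins).astype(int), num_bins - 1)

    lce, total_w = 0.0, w.sum()
    for b in range(num_bins):
        mask = bins == b
        w_b = w[mask].sum()
        if w_b == 0:
            continue
        # Kernel-weighted accuracy and confidence within the bin.
        acc_b = np.dot(w[mask], correct[mask]) / w_b
        conf_b = np.dot(w[mask], conf[mask]) / w_b
        lce += (w_b / total_w) * abs(acc_b - conf_b)
    return lce
```

With a very wide bandwidth the weights become uniform and the quantity collapses to an ECE-like global average, whereas a narrower bandwidth evaluated at a single query point exposes the kind of region-specific miscalibration the abstract says the ECE averages away.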

Cite this Paper


BibTeX
@InProceedings{pmlr-v180-luo22a,
  title     = {Local calibration: metrics and recalibration},
  author    = {Luo, Rachel and Bhatnagar, Aadyot and Bai, Yu and Zhao, Shengjia and Wang, Huan and Xiong, Caiming and Savarese, Silvio and Ermon, Stefano and Schmerling, Edward and Pavone, Marco},
  booktitle = {Proceedings of the Thirty-Eighth Conference on Uncertainty in Artificial Intelligence},
  pages     = {1286--1295},
  year      = {2022},
  editor    = {Cussens, James and Zhang, Kun},
  volume    = {180},
  series    = {Proceedings of Machine Learning Research},
  month     = {01--05 Aug},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v180/luo22a/luo22a.pdf},
  url       = {https://proceedings.mlr.press/v180/luo22a.html}
}
APA
Luo, R., Bhatnagar, A., Bai, Y., Zhao, S., Wang, H., Xiong, C., Savarese, S., Ermon, S., Schmerling, E. & Pavone, M. (2022). Local calibration: metrics and recalibration. Proceedings of the Thirty-Eighth Conference on Uncertainty in Artificial Intelligence, in Proceedings of Machine Learning Research 180:1286-1295. Available from https://proceedings.mlr.press/v180/luo22a.html.
