Individual Calibration with Randomized Forecasting

Shengjia Zhao, Tengyu Ma, Stefano Ermon
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:11387-11397, 2020.

Abstract

Machine learning applications often require calibrated predictions, e.g. a 90% credible interval should contain the true outcome 90% of the time. However, typical definitions of calibration only require this to hold on average, and offer no guarantees on predictions made on individual samples. Thus, predictions can be systematically over- or under-confident on certain subgroups, leading to issues of fairness and potential vulnerabilities. We show that calibration for individual samples is possible in the regression setup if and only if the predictions are randomized, i.e. outputting randomized credible intervals. Randomization removes systematic bias by trading off bias with variance. We design a training objective to enforce individual calibration and use it to train randomized regression functions. The resulting models are more calibrated for arbitrarily chosen subgroups of the data, and can achieve higher utility in decision making against adversaries that exploit miscalibrated predictions.
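The gap the abstract describes can be illustrated with a minimal synthetic sketch (not the paper's method): a forecaster whose 90% intervals cover the outcome 90% of the time on average can still badly over- or under-cover specific subgroups. All names and the data-generating process below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000

# Two hypothetical subgroups with different noise levels.
group = rng.integers(0, 2, n)           # subgroup label: 0 or 1
sigma = np.where(group == 0, 0.5, 1.5)  # per-group noise standard deviation
x = rng.uniform(0.0, 1.0, n)
y = 2.0 * x + sigma * rng.normal(0.0, 1.0, n)

# A deterministic forecaster with the correct mean but a single pooled
# uncertainty: its interval half-width is tuned so the 90% credible
# interval contains y 90% of the time *on average* over the whole data.
mean_pred = 2.0 * x
half_width = np.quantile(np.abs(y - mean_pred), 0.90)
covered = np.abs(y - mean_pred) <= half_width

print(f"overall coverage: {covered.mean():.3f}")              # ~0.90 by construction
print(f"group 0 coverage: {covered[group == 0].mean():.3f}")  # ~1.00: interval far too wide here
print(f"group 1 coverage: {covered[group == 1].mean():.3f}")  # ~0.80: interval too narrow here
```

The forecaster is average-calibrated by construction, yet each subgroup sees a systematically miscalibrated interval; this is the failure mode that the paper's randomized forecasters are designed to mitigate.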

Cite this Paper

BibTeX
@InProceedings{pmlr-v119-zhao20e,
  title = {Individual Calibration with Randomized Forecasting},
  author = {Zhao, Shengjia and Ma, Tengyu and Ermon, Stefano},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages = {11387--11397},
  year = {2020},
  editor = {Hal Daumé III and Aarti Singh},
  volume = {119},
  series = {Proceedings of Machine Learning Research},
  month = {13--18 Jul},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v119/zhao20e/zhao20e.pdf},
  url = {http://proceedings.mlr.press/v119/zhao20e.html},
  abstract = {Machine learning applications often require calibrated predictions, e.g. a 90% credible interval should contain the true outcome 90% of the time. However, typical definitions of calibration only require this to hold on average, and offer no guarantees on predictions made on individual samples. Thus, predictions can be systematically over- or under-confident on certain subgroups, leading to issues of fairness and potential vulnerabilities. We show that calibration for individual samples is possible in the regression setup if and only if the predictions are randomized, i.e. outputting randomized credible intervals. Randomization removes systematic bias by trading off bias with variance. We design a training objective to enforce individual calibration and use it to train randomized regression functions. The resulting models are more calibrated for arbitrarily chosen subgroups of the data, and can achieve higher utility in decision making against adversaries that exploit miscalibrated predictions.}
}
EndNote
%0 Conference Paper
%T Individual Calibration with Randomized Forecasting
%A Shengjia Zhao
%A Tengyu Ma
%A Stefano Ermon
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-zhao20e
%I PMLR
%P 11387--11397
%U http://proceedings.mlr.press/v119/zhao20e.html
%V 119
%X Machine learning applications often require calibrated predictions, e.g. a 90% credible interval should contain the true outcome 90% of the time. However, typical definitions of calibration only require this to hold on average, and offer no guarantees on predictions made on individual samples. Thus, predictions can be systematically over- or under-confident on certain subgroups, leading to issues of fairness and potential vulnerabilities. We show that calibration for individual samples is possible in the regression setup if and only if the predictions are randomized, i.e. outputting randomized credible intervals. Randomization removes systematic bias by trading off bias with variance. We design a training objective to enforce individual calibration and use it to train randomized regression functions. The resulting models are more calibrated for arbitrarily chosen subgroups of the data, and can achieve higher utility in decision making against adversaries that exploit miscalibrated predictions.
APA
Zhao, S., Ma, T. & Ermon, S. (2020). Individual Calibration with Randomized Forecasting. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:11387-11397. Available from http://proceedings.mlr.press/v119/zhao20e.html.
