High Probability Risk Control Under Covariate Shift

Duarte C. Almeida, João Bravo, Jacopo Bono, Pedro Bizarro, Mário A. T. Figueiredo
Proceedings of the Fourteenth Symposium on Conformal and Probabilistic Prediction with Applications, PMLR 266:133-152, 2025.

Abstract

Distribution-free uncertainty quantification is an emerging field, which encompasses risk control techniques in finite-sample settings with minimal distributional assumptions, making it suitable for high-stakes applications. In particular, high-probability risk control methods, namely the learn then test (LTT) framework, use a calibration set to control multiple risks with high confidence. However, these methods rely on the assumption that the calibration and target distributions are identical, which can pose challenges, for example, when controlling label-dependent risks in the absence of labeled target data. In this work, we propose a novel extension of LTT that handles covariate shift by directly weighting calibration losses with importance weights. We validate our method on a synthetic fraud detection task, aiming to control the false positive rate while minimizing false negatives, and on an image classification task, aiming to control the miscoverage of a set predictor while minimizing the average set size. The results show that our approach consistently yields less conservative risk control than existing baselines based on rejection sampling, which results in overall lower false negative rates and smaller prediction sets.
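The core idea the abstract describes, estimating a risk on the target distribution by importance-weighting calibration losses, can be illustrated with a minimal sketch. Everything below (Gaussian source and target covariate densities with a known closed-form weight, and a simple threshold loss) is a hypothetical setup for illustration, not the paper's actual method or experiments:

```python
import numpy as np

rng = np.random.default_rng(0)

# Calibration covariates drawn from the source distribution N(0, 1).
n = 10_000
x_cal = rng.normal(0.0, 1.0, size=n)

# Hypothetical covariate shift: target covariates ~ N(0.5, 1).
# Likelihood ratio w(x) = p_target(x) / p_cal(x) = exp(mu*x - mu^2/2),
# assumed known in closed form here (in practice it must be estimated).
def importance_weight(x, mu_target=0.5):
    return np.exp(mu_target * x - 0.5 * mu_target**2)

# Per-example 0/1 losses on the calibration set (e.g. a miscoverage indicator).
losses = (x_cal > 1.0).astype(float)

w = importance_weight(x_cal)

# The unweighted mean estimates the risk under the calibration distribution;
# the self-normalized weighted mean estimates the risk under the target
# distribution, using only unlabeled knowledge of the covariate shift.
risk_cal = losses.mean()
risk_target = np.sum(w * losses) / np.sum(w)
```

Under this setup the weighted estimate is pulled above the unweighted one, reflecting that the shifted target distribution puts more mass past the loss threshold; a high-probability guarantee in the LTT style would additionally require a concentration bound for the weighted estimator, which this sketch omits.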

Cite this Paper


BibTeX
@InProceedings{pmlr-v266-almeida25a,
  title     = {High Probability Risk Control Under Covariate Shift},
  author    = {Almeida, Duarte C. and Bravo, Jo\~{a}o and Bono, Jacopo and Bizarro, Pedro and Figueiredo, M\'{a}rio A. T.},
  booktitle = {Proceedings of the Fourteenth Symposium on Conformal and Probabilistic Prediction with Applications},
  pages     = {133--152},
  year      = {2025},
  editor    = {Nguyen, Khuong An and Luo, Zhiyuan and Papadopoulos, Harris and Löfström, Tuwe and Carlsson, Lars and Boström, Henrik},
  volume    = {266},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--12 Sep},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v266/main/assets/almeida25a/almeida25a.pdf},
  url       = {https://proceedings.mlr.press/v266/almeida25a.html},
  abstract  = {Distribution-free uncertainty quantification is an emerging field, which encompasses risk control techniques in finite sample settings with minimal distributional assumptions, making it suitable for high-stakes applications. In particular, high-probability risk control methods, namely the learn then test (LTT) framework, use a calibration set to control multiple risks with high confidence. However, these methods rely on the assumption that the calibration and target distributions are identical, which can pose challenges, for example, when controlling label-dependent risks under the absence of labeled target data. In this work, we propose a novel extension of LTT that handles covariate shifts by directly weighting calibration losses with importance weights. We validate our method on a synthetic fraud detection task, aiming to control the false positive rate while minimizing false negatives, and on an image classification task, to control the miscoverage of a set predictor while minimizing the average set size. The results show that our approach consistently yields less conservative risk control than existing baselines based on rejection sampling, which results in overall lower false negative rates and smaller prediction sets.}
}
Endnote
%0 Conference Paper
%T High Probability Risk Control Under Covariate Shift
%A Duarte C. Almeida
%A João Bravo
%A Jacopo Bono
%A Pedro Bizarro
%A Mário A. T. Figueiredo
%B Proceedings of the Fourteenth Symposium on Conformal and Probabilistic Prediction with Applications
%C Proceedings of Machine Learning Research
%D 2025
%E Khuong An Nguyen
%E Zhiyuan Luo
%E Harris Papadopoulos
%E Tuwe Löfström
%E Lars Carlsson
%E Henrik Boström
%F pmlr-v266-almeida25a
%I PMLR
%P 133--152
%U https://proceedings.mlr.press/v266/almeida25a.html
%V 266
%X Distribution-free uncertainty quantification is an emerging field, which encompasses risk control techniques in finite sample settings with minimal distributional assumptions, making it suitable for high-stakes applications. In particular, high-probability risk control methods, namely the learn then test (LTT) framework, use a calibration set to control multiple risks with high confidence. However, these methods rely on the assumption that the calibration and target distributions are identical, which can pose challenges, for example, when controlling label-dependent risks under the absence of labeled target data. In this work, we propose a novel extension of LTT that handles covariate shifts by directly weighting calibration losses with importance weights. We validate our method on a synthetic fraud detection task, aiming to control the false positive rate while minimizing false negatives, and on an image classification task, to control the miscoverage of a set predictor while minimizing the average set size. The results show that our approach consistently yields less conservative risk control than existing baselines based on rejection sampling, which results in overall lower false negative rates and smaller prediction sets.
APA
Almeida, D.C., Bravo, J., Bono, J., Bizarro, P. & Figueiredo, M.A.T. (2025). High Probability Risk Control Under Covariate Shift. Proceedings of the Fourteenth Symposium on Conformal and Probabilistic Prediction with Applications, in Proceedings of Machine Learning Research 266:133-152. Available from https://proceedings.mlr.press/v266/almeida25a.html.

Related Material