Adapting Prediction Sets to Distribution Shifts Without Labels

Kevin Kasa, Zhiyu Zhang, Heng Yang, Graham W. Taylor
Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence, PMLR 286:1990-2010, 2025.

Abstract

Recently, there has been a surge of interest in deploying confidence set predictions rather than point predictions in machine learning. Unfortunately, the effectiveness of such prediction sets is frequently impaired by distribution shifts in practice, and the challenge is often compounded by the lack of ground truth labels at test time. Focusing on a standard set-valued prediction framework called conformal prediction (CP), this paper studies how to improve its practical performance using only unlabeled data from the shifted test domain. This is achieved by two new methods called $\texttt{ECP}$ and $\texttt{E{\small A}CP}$, whose main idea is to adjust the score function in CP according to its base model's own uncertainty evaluation. Through extensive experiments on a number of large-scale datasets and neural network architectures, we show that our methods provide consistent improvement over existing baselines and nearly match the performance of fully supervised methods.
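The core idea, adjusting the conformal threshold using the base model's own uncertainty on unlabeled test data, can be sketched briefly. The Python snippet below shows standard split conformal prediction together with a hypothetical entropy-based threshold adjustment; it illustrates the general idea only and is not the paper's exact ECP/EACP procedure. The entropy_offset heuristic and its 0.1 scale factor are assumptions made for this example.

import numpy as np

def softmax(logits):
    # Numerically stable softmax over the class dimension.
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def conformal_threshold(cal_logits, cal_labels, alpha=0.1):
    # Standard split CP with the score s(x, y) = 1 - softmax probability of the true label.
    probs = softmax(cal_logits)
    scores = 1.0 - probs[np.arange(len(cal_labels)), cal_labels]
    n = len(scores)
    rank = int(np.ceil((n + 1) * (1 - alpha)))  # 1-indexed order statistic
    rank = min(max(rank, 1), n)
    return np.sort(scores)[rank - 1]

def entropy_offset(test_logits, cal_logits):
    # Hypothetical uncertainty-based adjustment (an assumption for this sketch):
    # the gap between mean predictive entropy on unlabeled test data and on calibration data.
    def mean_entropy(logits):
        p = softmax(logits)
        return float(-(p * np.log(p + 1e-12)).sum(axis=1).mean())
    return mean_entropy(test_logits) - mean_entropy(cal_logits)

def prediction_sets(test_logits, qhat):
    # Include every label whose score falls below the (possibly adjusted) threshold.
    probs = softmax(test_logits)
    return [np.where(1.0 - p <= qhat)[0] for p in probs]

# Usage with random stand-in logits (replace with a real model's outputs).
rng = np.random.default_rng(0)
cal_logits = rng.normal(size=(500, 10))
cal_labels = rng.integers(0, 10, size=500)
test_logits = rng.normal(size=(200, 10))
qhat = conformal_threshold(cal_logits, cal_labels, alpha=0.1)
qhat_adj = np.clip(qhat + 0.1 * entropy_offset(test_logits, cal_logits), 0.0, 1.0)  # 0.1 is arbitrary
sets = prediction_sets(test_logits, qhat_adj)

With random logits the adjustment is negligible; under a genuine distribution shift the predictive entropy on test data typically grows, which raises the threshold and enlarges the prediction sets.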

Cite this Paper


BibTeX
@InProceedings{pmlr-v286-kasa25a,
  title     = {Adapting Prediction Sets to Distribution Shifts Without Labels},
  author    = {Kasa, Kevin and Zhang, Zhiyu and Yang, Heng and Taylor, Graham W.},
  booktitle = {Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence},
  pages     = {1990--2010},
  year      = {2025},
  editor    = {Chiappa, Silvia and Magliacane, Sara},
  volume    = {286},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--25 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v286/main/assets/kasa25a/kasa25a.pdf},
  url       = {https://proceedings.mlr.press/v286/kasa25a.html},
  abstract  = {Recently there has been a surge of interest to deploy confidence set predictions rather than point predictions in machine learning. Unfortunately, the effectiveness of such prediction sets is frequently impaired by distribution shifts in practice, and the challenge is often compounded by the lack of ground truth labels at test time. Focusing on a standard set-valued prediction framework called conformal prediction (CP), this paper studies how to improve its practical performance using only unlabeled data from the shifted test domain. This is achieved by two new methods called $\texttt{ECP}$ and $\texttt{E{\small A}CP}$, whose main idea is to adjust the score function in CP according to its base model’s own uncertainty evaluation. Through extensive experiments on a number of large-scale datasets and neural network architectures, we show that our methods provide consistent improvement over existing baselines and nearly match the performance of fully supervised methods.}
}
Endnote
%0 Conference Paper
%T Adapting Prediction Sets to Distribution Shifts Without Labels
%A Kevin Kasa
%A Zhiyu Zhang
%A Heng Yang
%A Graham W. Taylor
%B Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence
%C Proceedings of Machine Learning Research
%D 2025
%E Silvia Chiappa
%E Sara Magliacane
%F pmlr-v286-kasa25a
%I PMLR
%P 1990--2010
%U https://proceedings.mlr.press/v286/kasa25a.html
%V 286
%X Recently there has been a surge of interest to deploy confidence set predictions rather than point predictions in machine learning. Unfortunately, the effectiveness of such prediction sets is frequently impaired by distribution shifts in practice, and the challenge is often compounded by the lack of ground truth labels at test time. Focusing on a standard set-valued prediction framework called conformal prediction (CP), this paper studies how to improve its practical performance using only unlabeled data from the shifted test domain. This is achieved by two new methods called $\texttt{ECP}$ and $\texttt{E{\small A}CP}$, whose main idea is to adjust the score function in CP according to its base model’s own uncertainty evaluation. Through extensive experiments on a number of large-scale datasets and neural network architectures, we show that our methods provide consistent improvement over existing baselines and nearly match the performance of fully supervised methods.
APA
Kasa, K., Zhang, Z., Yang, H. & Taylor, G.W. (2025). Adapting Prediction Sets to Distribution Shifts Without Labels. Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence, in Proceedings of Machine Learning Research 286:1990-2010. Available from https://proceedings.mlr.press/v286/kasa25a.html.
