Adapting Prediction Sets to Distribution Shifts Without Labels
Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence, PMLR 286:1990-2010, 2025.
Abstract
Recently there has been a surge of interest in deploying confidence set predictions rather than point predictions in machine learning. Unfortunately, the effectiveness of such prediction sets is frequently impaired by distribution shifts in practice, and the challenge is often compounded by the lack of ground-truth labels at test time. Focusing on a standard set-valued prediction framework called conformal prediction (CP), this paper studies how to improve its practical performance using only unlabeled data from the shifted test domain. This is achieved by two new methods, $\texttt{ECP}$ and $\texttt{E{\small A}CP}$, whose main idea is to adjust the score function in CP according to the base model’s own uncertainty evaluation. Through extensive experiments on a number of large-scale datasets and neural network architectures, we show that our methods provide consistent improvements over existing baselines and nearly match the performance of fully supervised methods.
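To make the setting concrete, below is a minimal sketch of standard split conformal prediction with the common $1 - \text{softmax}$ score, followed by a simple label-free adjustment that matches predictive entropy on the unlabeled test data to the entropy observed at calibration time. The adjustment rule, function names, and parameters here are illustrative assumptions for exposition only; they are not the paper's exact $\texttt{ECP}$ or $\texttt{E{\small A}CP}$ definitions.

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def calibrate(cal_logits, cal_labels, alpha=0.1):
    """Standard split conformal calibration with the 1 - softmax score."""
    probs = softmax(cal_logits)
    n = len(cal_labels)
    scores = 1.0 - probs[np.arange(n), cal_labels]
    # Finite-sample-corrected quantile of the calibration scores.
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    return np.quantile(scores, min(q_level, 1.0), method="higher")

def entropy(probs):
    return -(probs * np.log(probs + 1e-12)).sum(axis=1)

def entropy_adjusted_sets(test_logits, q_hat, cal_entropy):
    """Illustrative label-free adjustment (an assumption, not the paper's rule):
    rescale test logits by a temperature so that the average predictive entropy
    on the unlabeled, shifted test data matches the calibration-time entropy,
    then form prediction sets with the unchanged conformal threshold q_hat."""
    temps = np.linspace(0.5, 5.0, 50)
    gaps = [abs(entropy(softmax(test_logits / t)).mean() - cal_entropy)
            for t in temps]
    t_star = temps[int(np.argmin(gaps))]
    probs = softmax(test_logits / t_star)
    scores = 1.0 - probs  # score of every candidate label for every test point
    return [np.flatnonzero(s <= q_hat) for s in scores]
```

A typical usage would compute `q_hat = calibrate(cal_logits, cal_labels)` and `cal_entropy = entropy(softmax(cal_logits)).mean()` on held-out labeled calibration data from the source domain, then call `entropy_adjusted_sets` on the unlabeled test-domain logits; the point of the sketch is only that the base model's own uncertainty can be exploited without any test labels.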