A constrained Bayesian approach to out-of-distribution prediction
Proceedings of the Thirty-Ninth Conference on Uncertainty in Artificial Intelligence, PMLR 216:2248-2258, 2023.
Abstract
Consider the problem of out-of-distribution prediction given data from multiple environments. While a sufficiently diverse collection of training environments will facilitate the identification of an invariant predictor with optimal generalization performance, many applications provide only a limited number of environments. It is thus necessary to consider adapting to distribution shift using a handful of labeled test samples. We propose a constrained Bayesian approach for this task, which restricts the posterior to models whose worst-group training loss does not exceed a prespecified threshold. Our method avoids a pathology of the standard Bayesian posterior, which occurs when spurious correlations improve in-distribution prediction. We also show that on certain high-dimensional linear problems, constrained modeling improves the sample efficiency of adaptation. Synthetic and real-world experiments demonstrate the robust performance of our approach.
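To make the constraint concrete, here is a minimal sketch (not the paper's implementation): given approximate posterior samples together with their log-likelihoods and per-environment training losses, it zeroes out the weight of any model whose worst-group loss exceeds a threshold `tau`, then renormalizes. The function name, the threshold value, and the toy inputs are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def constrained_posterior_weights(log_lik, group_losses, tau):
    """Self-normalized importance weights for a posterior restricted to
    models whose worst-group training loss is at most tau (assumed given)."""
    worst = group_losses.max(axis=1)              # worst-group loss per model sample
    w = np.exp(log_lik - log_lik.max())           # unnormalized posterior weights
    w = np.where(worst <= tau, w, 0.0)            # drop models violating the constraint
    if w.sum() == 0.0:
        raise ValueError("no sample satisfies the worst-group constraint; raise tau")
    return w / w.sum()

# Toy usage: 1000 candidate models, 3 training environments (all values synthetic).
log_lik = rng.normal(size=1000)                      # stand-in log-likelihoods
group_losses = rng.gamma(2.0, 0.1, size=(1000, 3))   # stand-in per-environment losses
weights = constrained_posterior_weights(log_lik, group_losses, tau=0.3)
print("effective sample size:", 1.0 / np.sum(weights ** 2))
```

In this hedged reading, the constraint acts as a filter on the posterior: models that violate the worst-group bound receive zero weight, so the posterior cannot concentrate further on models whose in-distribution fit is inflated by spurious correlations.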