Learning Under Random Distributional Shifts
Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, PMLR 238:3943-3951, 2024.
Abstract
Many existing approaches for generating predictions under distribution shift model the shift as adversarial or as low-rank in suitable representations. In various real-world settings, however, we might expect shifts to arise through the superposition of many small and random changes in the population and environment. We therefore consider a class of random distribution shift models that capture arbitrary changes in the underlying covariate space, together with dense, random shocks to the relationship between the covariates and the outcomes. In this setting, we characterize the benefits and drawbacks of several alternative prediction strategies: the standard approach that directly predicts the long-term outcome of interest, the proxy approach that directly predicts a shorter-term proxy outcome, and a hybrid approach that utilizes both the long-term outcome and the shorter-term proxy outcome(s). We show that the hybrid approach is robust to the strength of the distribution shift and of the proxy relationship. We apply this method to datasets in two high-impact domains: asylum-seeker placement and early childhood education. In both settings, we find that the proposed approach results in substantially lower mean-squared error than current approaches.
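To make the three strategies concrete, the following is a minimal illustrative sketch, not the paper's estimator: it assumes linear regression models, synthetic data, and a hybrid rule that chains a covariate-to-proxy model with a proxy-to-outcome model. All variable names (`X`, `S`, `Y`, `X_new`) and the specific way the hybrid combines the two outcomes are assumptions made for illustration only.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n, d = 2000, 5

# Synthetic training data: covariates X, shorter-term proxy S, long-term outcome Y.
X = rng.normal(size=(n, d))
beta = rng.normal(size=d)
S = X @ beta + 0.3 * rng.normal(size=n)                      # proxy outcome
Y = 2.0 * S + 0.1 * (X @ rng.normal(size=d)) + 0.5 * rng.normal(size=n)  # long-term outcome

# Covariates observed at deployment, after a (small, dense) shift.
X_new = rng.normal(size=(n, d)) + 0.2

# Standard approach: regress the long-term outcome directly on covariates.
standard = LinearRegression().fit(X, Y)
pred_standard = standard.predict(X_new)

# Proxy approach: regress the proxy on covariates and use it in place of Y.
proxy_model = LinearRegression().fit(X, S)
pred_proxy = proxy_model.predict(X_new)

# Hybrid approach (one simple variant, assumed here): predict the proxy from
# covariates, then map the predicted proxy to the long-term outcome.
outcome_from_proxy = LinearRegression().fit(S.reshape(-1, 1), Y)
pred_hybrid = outcome_from_proxy.predict(
    proxy_model.predict(X_new).reshape(-1, 1)
)
```

The intended takeaway is only the structure of the three prediction pipelines; the paper's actual hybrid estimator and its robustness guarantees are developed in the full text.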