Minimax Regret Optimization for Robust Machine Learning under Distribution Shift
Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:2704-2729, 2022.
Abstract
In this paper, we consider learning scenarios where the learned model is evaluated under an unknown test distribution that potentially differs from the training distribution (i.e., distribution shift). The learner has access to a family of weight functions such that the test distribution is a reweighting of the training distribution under one of these functions, a setting typically studied under the name of Distributionally Robust Optimization (DRO). We consider the problem of deriving regret bounds in the classical learning theory setting, and require that the resulting regret bounds hold uniformly for all potential test distributions. We show that the DRO formulation does not guarantee uniformly small regret under distribution shift. We instead propose an alternative method called Minimax Regret Optimization (MRO), and show that under suitable conditions, this method achieves uniformly low regret across all test distributions. We also adapt our technique to have strong guarantees when the test distributions are heterogeneous in their similarity to the training data. Given the widespread optimization of worst-case risks in current approaches to robust machine learning, we believe that MRO can be an attractive framework to address a broad range of distribution shift scenarios.
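To make the contrast between the two formulations concrete, the following is a minimal sketch of the objectives in informal notation; the symbols (a model class F, a weight-function family W, a training distribution P, and a loss ℓ) are our own shorthand rather than notation taken from the paper.

    % DRO: minimize the worst-case reweighted risk over the family W
    \min_{f \in \mathcal{F}} \; \max_{w \in \mathcal{W}} \;
        \mathbb{E}_{x \sim P}\bigl[ w(x)\, \ell(f, x) \bigr]

    % MRO: minimize the worst-case regret, i.e., the excess risk of f over
    % the best model in F on each candidate test distribution
    \min_{f \in \mathcal{F}} \; \max_{w \in \mathcal{W}} \;
        \Bigl( \mathbb{E}_{x \sim P}\bigl[ w(x)\, \ell(f, x) \bigr]
             \;-\; \min_{f' \in \mathcal{F}} \mathbb{E}_{x \sim P}\bigl[ w(x)\, \ell(f', x) \bigr] \Bigr)

The only difference is the subtracted baseline term: because the best achievable risk can vary widely across reweightings, the DRO objective can be dominated by the hardest test distribution even when no model in the class does well there, whereas MRO compares each candidate against the best model on the same distribution, which is what allows its guarantees to hold uniformly over test distributions.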