Learn to Expect the Unexpected: Probably Approximately Correct Domain Generalization
Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, PMLR 130:3574-3582, 2021.
Abstract
Domain generalization is the problem of machine learning when the training data and the test data come from different “domains” (data distributions). We propose an elementary theoretical model of the domain generalization problem, introducing the concept of a meta-distribution over domains. In our model, the training data available to a learning algorithm consist of multiple datasets, each drawn from a single domain, with the domains themselves drawn in turn from the meta-distribution. We show that our model can capture a rich range of learning phenomena specific to domain generalization in three different settings: learning with Massart noise, learning decision trees, and feature selection. In each of these settings, we demonstrate approaches that leverage domain generalization to reduce computational or data requirements. Experiments confirm that our feature selection algorithm indeed ignores spurious correlations and improves generalization.
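To make the sampling model in the abstract concrete, here is a minimal sketch of the data-generation process it describes: domains are drawn from a meta-distribution, each sampled domain yields one training dataset, and the test domain is a fresh draw from the same meta-distribution. The particular meta-distribution (random linear labelers over Gaussian inputs) and all names below are illustrative assumptions, not the paper's construction.

import numpy as np

rng = np.random.default_rng(0)

def sample_domain(dim=5):
    """Draw one domain from an illustrative meta-distribution.

    Each domain is a deterministic labeler y = sign(w . x) with its own
    weight vector w, so domains share structure but induce different
    data distributions.
    """
    w = rng.normal(size=dim)             # domain-specific parameters
    def draw_dataset(n):
        X = rng.normal(size=(n, dim))    # the domain's input distribution
        y = np.sign(X @ w)               # the domain's labeling rule
        return X, y
    return draw_dataset

# Training data: multiple datasets, each from a single domain,
# with the domains drawn in turn from the meta-distribution.
train_sets = [sample_domain()(n=100) for _ in range(10)]

# Test time: an unseen domain, also drawn from the meta-distribution,
# which is what the learner is evaluated on in this setting.
test_X, test_y = sample_domain()(n=100)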