Beyond Discrepancy: A Closer Look at the Theory of Distribution Shift
Proceedings of The 37th International Conference on Algorithmic Learning Theory, PMLR 313:1-19, 2026.
Abstract
The learning theory of distribution shift generally bounds performance on the target distribution as a function of the discrepancy between the source and target distributions, and it rarely guarantees high target accuracy. Instead of relying on the discrepancy, we adopt an assumption inspired by Invariant Risk Minimization, under which the source and target distributions are unified by an unknown feature projection. Under this assumption, we show that a learner can exploit the relationship between the source and target distributions to greatly reduce the number of target samples required to achieve high accuracy. To quantify this effect, we introduce a new combinatorial complexity measure, the distance dimension, and derive bounds for linear maps and neural networks.