The sample complexity of multi-distribution learning
Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:4185-4204, 2024.
Abstract
Multi-distribution learning generalizes classic PAC learning to handle data coming from multiple distributions. Given a set of $k$ data distributions and a hypothesis class of VC dimension $d$, the goal is to learn a hypothesis that minimizes the maximum population loss over the $k$ distributions, up to $\epsilon$ additive error. In this paper, we settle the sample complexity of multi-distribution learning by giving an algorithm with sample complexity $\widetilde{O}\big((d+k)\epsilon^{-2}\big) \cdot (k/\epsilon)^{o(1)}$. This matches the lower bound up to sub-polynomial factors and resolves the COLT 2023 open problem of Awasthi, Haghtalab and Zhao.
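To make the guarantee concrete, a standard formalization of the objective described above is sketched below; the notation ($\mathcal{H}$ for the hypothesis class, $D_1, \dots, D_k$ for the distributions, and the 0-1 loss $L_{D_i}$) is our choice of illustration and is not fixed by the abstract itself.

% Minimal sketch of the multi-distribution learning guarantee.
% Assumed notation: hypothesis class \mathcal{H} of VC dimension d,
% distributions D_1, ..., D_k, and 0-1 population loss L_{D_i}.
% The learner samples from the k distributions and outputs \hat{h} with
\[
  \max_{i \in [k]} L_{D_i}(\hat{h})
    \;\le\; \min_{h \in \mathcal{H}} \, \max_{i \in [k]} L_{D_i}(h) \;+\; \epsilon,
  \qquad
  L_{D_i}(h) \;=\; \Pr_{(x,y) \sim D_i}\big[ h(x) \neq y \big],
\]
% with high probability, using a total of
% \widetilde{O}\big((d+k)\epsilon^{-2}\big) \cdot (k/\epsilon)^{o(1)} samples.

Note that the benchmark is the single best hypothesis for the worst-case distribution, so the guarantee is a uniform one over all $k$ distributions simultaneously, not an average-case bound.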