Optimistic Rates for Learning from Label Proportions
Proceedings of the Thirty Seventh Conference on Learning Theory, PMLR 247:3437-3474, 2024.
Abstract
We consider a weakly supervised learning problem called Learning from Label Proportions (LLP), in which examples are grouped into "bags" and only the average label within each bag is revealed to the learner. We study learning rules for LLP that achieve PAC learning guarantees for the classification loss. We establish that the classical Empirical Proportional Risk Minimization (EPRM) learning rule (Yu et al., 2014) achieves fast rates under realizability, but that EPRM and similar proportion-matching learning rules can fail in the agnostic setting. We also show that both (1) a debiased proportional square loss and (2) the recently proposed EasyLLP learning rule (Busa-Fekete et al., 2023) achieve "optimistic rates" (Panchenko, 2002): in both the realizable and agnostic settings, their sample complexity is optimal (up to log factors) in terms of $\epsilon$, $\delta$, and the VC dimension.
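To make the setup concrete, here is a minimal Python sketch, not the paper's implementation: a toy realizable LLP instance over one-dimensional threshold classifiers, fit by the proportion-matching EPRM rule, plus an EasyLLP-style debiased per-instance label estimate $\tilde{y} = k(\alpha - p) + p$ for comparison. The data model, hypothesis class, and all names (`make_bags`, `eprm_threshold`, `easyllp_labels`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_bags(n_bags=200, bag_size=8, true_threshold=0.3):
    """Toy LLP data: 1-d features grouped into bags; the learner sees only
    each bag's label proportion, never the individual labels."""
    X = rng.uniform(-1.0, 1.0, size=(n_bags, bag_size))
    y = (X > true_threshold).astype(int)  # realizable: labels come from a threshold
    alphas = y.mean(axis=1)               # bag-level label proportions (the supervision)
    return X, alphas

def proportional_risk(theta, X, alphas):
    """Empirical proportional risk of the classifier 1[x > theta]: squared gap
    between predicted and observed bag proportions, averaged over bags."""
    preds = (X > theta).astype(int)
    return np.mean((preds.mean(axis=1) - alphas) ** 2)

def eprm_threshold(X, alphas, grid=np.linspace(-1.0, 1.0, 401)):
    """EPRM (proportion matching) over a grid of threshold classifiers."""
    risks = [proportional_risk(t, X, alphas) for t in grid]
    return float(grid[int(np.argmin(risks))])

def easyllp_labels(alphas, bag_size, p):
    """EasyLLP-style debiased per-instance label estimate y~ = k(alpha - p) + p,
    unbiased for an instance's label when p = E[y] (a sketch of the debiasing
    idea, not necessarily the paper's exact estimator)."""
    return bag_size * (alphas - p) + p

X, alphas = make_bags()
theta_hat = eprm_threshold(X, alphas)
p_hat = alphas.mean()  # plug-in estimate of the base rate E[y]
y_tilde = easyllp_labels(alphas, X.shape[1], p_hat)
print(f"EPRM threshold: {theta_hat:.3f} (true threshold: 0.3)")
print(f"debiased label estimate for the first bag: {y_tilde[0]:.2f}")
```

In this realizable regime EPRM recovers the threshold, consistent with the fast rates above; the abstract's negative result is that such proportion-matching rules can nonetheless fail in the agnostic setting.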