Weak to Strong Learning from Aggregate Labels
Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence, PMLR 286:2876-2891, 2025.
Abstract
In learning from aggregate labels, the training data consists of sets or “bags” of feature-vectors (instances) along with an aggregate label for each bag derived from the (usually $\{0,1\}$-valued) labels of its constituent instances. In *learning from label proportions* (LLP), the aggregate label of a bag is the average of the instance labels, whereas in *multiple instance learning* (MIL) it is their OR, i.e., disjunction. The goal is to train an instance-level predictor that maximizes the accuracy, i.e., the fraction of *satisfied* bags: those on which the model’s induced labels are consistent with the target aggregate label. A weak learner in this context is one whose accuracy on the training bags is a constant $< 1$, while a strong learner’s accuracy can be arbitrarily close to $1$. We study the problem of using a weak learner on such training bags with aggregate labels to obtain a strong learner. In a novel result, our work proves the impossibility of boosting in the LLP setting using weak learners of any accuracy $< 1$, by constructing a collection of bags for which such weak learners exist (for any weight assignment) while no strong learner is admitted. A variant of this construction also rules out boosting in MIL for a non-trivial range of weak learner accuracies. In the LLP setting, however, we show that a weak learner (with small accuracy) on large enough bags can in fact be used to obtain a strong learner for small bags, in polynomial time. We also provide a more efficient, sampling-based variant of our procedure with probabilistic guarantees, which are empirically validated on three real and two synthetic datasets.
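
As a concrete illustration of the definitions above, the following minimal Python sketch (helper names are hypothetical, not from the paper) computes the bag-level accuracy, i.e., the fraction of satisfied bags, under the LLP (label-proportion) and MIL (disjunction) aggregation rules.

```python
import numpy as np

def bag_satisfied(pred_labels, target_agg, mode):
    """A bag is *satisfied* if the predicted {0,1} instance labels
    are consistent with its target aggregate label."""
    pred_labels = np.asarray(pred_labels)
    if mode == "llp":   # LLP: aggregate label = average of instance labels
        return bool(np.isclose(pred_labels.mean(), target_agg))
    if mode == "mil":   # MIL: aggregate label = OR (disjunction) of instance labels
        return int(pred_labels.any()) == target_agg
    raise ValueError(f"unknown mode: {mode}")

def bag_accuracy(bags, agg_labels, predictor, mode="llp"):
    """Fraction of satisfied bags under the given aggregation rule."""
    hits = [bag_satisfied(predictor(X), y, mode) for X, y in zip(bags, agg_labels)]
    return float(np.mean(hits))

# Example: two bags of 2-dimensional instances with LLP (label-proportion) targets.
bags = [np.array([[0.1, 0.2], [0.9, 0.8]]), np.array([[0.4, 0.4], [0.6, 0.7]])]
agg_labels = [0.5, 1.0]                                    # average instance label per bag
predictor = lambda X: (X[:, 0] > 0.5).astype(int)          # a simple threshold predictor
print(bag_accuracy(bags, agg_labels, predictor, mode="llp"))  # -> 0.5
```

In this sketch the first bag is satisfied (predicted labels average to the target proportion 0.5) and the second is not, so the predictor attains bag accuracy 0.5; a weak learner in the paper's sense would attain some constant accuracy bounded away from 1 on such bags.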