The Real Price of Bandit Information in Multiclass Classification
Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:1573-1598, 2024.
Abstract
We revisit the classical problem of multiclass classification with bandit feedback (Kakade, Shalev-Shwartz and Tewari, 2008), where each example belongs to one of K possible labels and feedback is restricted to whether or not the predicted label is correct. Our primary inquiry concerns the dependency on the number of labels K, and whether T-step regret bounds in this setting can be improved beyond the \smash{\sqrt{KT}} dependence exhibited by existing algorithms. Our main contribution is in showing that the minimax regret of bandit multiclass is in fact more nuanced, and is of the form \smash{\widetilde{\Theta}\big(\min\{|\mathcal{H}| + \sqrt{T},\ \sqrt{KT \log |\mathcal{H}|}\,\}\big)}, where \mathcal{H} is the underlying (finite) hypothesis class. In particular, we present a new bandit classification algorithm that guarantees regret \smash{\widetilde{O}(|\mathcal{H}|+\sqrt{T})}, improving over classical algorithms for moderately-sized hypothesis classes, and give a matching lower bound establishing tightness of the upper bound (up to logarithmic factors) in all parameter regimes.
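To get a feel for which term of the minimax rate dominates, here is a small Python sketch (the function name and sample parameters are ours, not from the paper) that evaluates both terms of the bound \widetilde{\Theta}(\min\{|\mathcal{H}| + \sqrt{T},\ \sqrt{KT \log |\mathcal{H}|}\}) stated above, ignoring constants and log-factors:

```python
import math

def bandit_multiclass_regret(H, K, T):
    """Minimax regret (up to constants and log-factors) for bandit
    multiclass classification, as stated in the abstract:
    min{ |H| + sqrt(T), sqrt(K * T * log|H|) },
    where H is the hypothesis class size, K the number of labels,
    and T the horizon. Hypothetical illustration, not the paper's code."""
    new_bound = H + math.sqrt(T)          # new algorithm's regime
    classical_bound = math.sqrt(K * T * math.log(H))  # classical sqrt(KT)-type regime
    return min(new_bound, classical_bound)

# Moderately-sized hypothesis class: the |H| + sqrt(T) term wins,
# removing the dependence on K entirely.
print(bandit_multiclass_regret(100, 1000, 10**6))

# Very large hypothesis class: the classical sqrt(KT log|H|) term wins.
print(bandit_multiclass_regret(10**6, 10, 10**4))
```

In the first call the class size |H| = 100 is small next to √T, so the new bound |H| + √T = 1100 beats the classical √(KT log|H|) ≈ 6.8 × 10⁴; in the second, |H| is so large that the classical term is the smaller of the two.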