Sparsity, variance and curvature in multi-armed bandits
Proceedings of Algorithmic Learning Theory, PMLR 83:111-127, 2018.
Abstract
In (online) learning theory the concepts of sparsity, variance and curvature are well-understood and are routinely used to obtain refined regret and generalization bounds. In this paper we further our understanding of these concepts in the more challenging limited feedback scenario. We consider the adversarial multi-armed bandit and linear bandit settings and solve several open problems pertaining to the existence of algorithms with favorable regret bounds under the following assumptions: (i) sparsity of the individual losses, (ii) small variation of the loss sequence, and (iii) curvature of the action set. Specifically we show that (i) for $s$-sparse losses one can obtain $\tilde{O}(\sqrt{s T})$-regret (solving an open problem by Kwon and Perchet), (ii) for loss sequences with variation bounded by $Q$ one can obtain $\tilde{O}(\sqrt{Q})$-regret (solving an open problem by Kale and Hazan), and (iii) for linear bandit on an $\ell_p^n$ ball one can obtain $\tilde{O}(\sqrt{n T})$-regret for $p \in [1,2]$ and one has $\tilde{\Omega}(n \sqrt{T})$-regret for $p > 2$ (solving an open problem by Bubeck, Cesa-Bianchi and Kakade). A key new insight to obtain these results is to use regularizers satisfying more refined conditions than general self-concordance.
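To make the adversarial multi-armed bandit setting concrete, here is a minimal sketch of the classic EXP3 algorithm (exponential weights with importance-weighted loss estimates). This is standard background for the problems the abstract discusses, not the paper's own algorithm; the paper's contribution lies in more refined regularizers, which this sketch does not implement. The function names and parameters here are illustrative choices.

```python
import math
import random

def exp3(loss_fn, n_arms, horizon, eta, rng):
    """Classic EXP3 for the adversarial multi-armed bandit (illustrative sketch).

    loss_fn(t, arm) returns the loss in [0, 1] of the pulled arm at round t;
    under bandit feedback only the pulled arm's loss is observed.
    Returns the total incurred loss and the final sampling distribution.
    """
    weights = [1.0] * n_arms
    total_loss = 0.0
    for t in range(horizon):
        total = sum(weights)
        probs = [w / total for w in weights]
        arm = rng.choices(range(n_arms), weights=probs)[0]
        loss = loss_fn(t, arm)
        total_loss += loss
        # Importance-weighted estimate: unbiased since E[loss/p(arm)] = true loss.
        est = loss / probs[arm]
        # Multiplicative update only for the pulled arm (others implicitly get
        # a zero loss estimate this round).
        weights[arm] *= math.exp(-eta * est)
    total = sum(weights)
    return total_loss, [w / total for w in weights]

# Illustrative run: arm 0 always suffers loss 0.1, the others 0.9, so the
# sampling distribution should concentrate on arm 0 over time.
rng = random.Random(0)
total, probs = exp3(lambda t, arm: 0.1 if arm == 0 else 0.9,
                    n_arms=3, horizon=2000, eta=0.05, rng=rng)
```

EXP3 attains $\tilde{O}(\sqrt{n T})$ regret against an adversarial loss sequence; the paper's results concern sharper bounds (e.g. $\tilde{O}(\sqrt{s T})$ for $s$-sparse losses) that this vanilla scheme does not achieve.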