Small-loss bounds for online learning with partial information
Proceedings of the 31st Conference On Learning Theory, PMLR 75:979–986, 2018.
Abstract
We consider the problem of adversarial (non-stochastic) online learning with partial information feedback, where at each round, a decision maker selects an action from a finite set of alternatives. We develop a black-box approach for such problems where the learner observes as feedback only losses of a subset of the actions that includes the selected action. When losses of actions are non-negative, under the graph-based feedback model introduced by Mannor and Shamir, we offer algorithms that attain the so-called “small-loss” $o(\alpha L^{\star})$ regret bounds with high probability, where $\alpha$ is the independence number of the graph, and $L^{\star}$ is the loss of the best action. Prior to our work, there was no data-dependent guarantee for general feedback graphs even for pseudo-regret (without dependence on the number of actions, i.e. utilizing the increased information feedback). Taking advantage of the black-box nature of our technique, we extend our results to many other applications such as semi-bandits (including routing in networks), contextual bandits (even with an infinite comparator class), as well as learning with slowly changing (shifting) comparators. In the special case of classical bandit and semi-bandit problems, we provide optimal small-loss, high-probability guarantees of $\tilde{O}(\sqrt{dL^{\star}})$ for actual regret, where $d$ is the number of actions, answering open questions of Neu. Previous bounds for bandits and semi-bandits were known only for pseudo-regret and only in expectation. We also offer an optimal $\tilde{O}(\sqrt{\kappa L^{\star}})$ regret guarantee for fixed feedback graphs with clique-partition number at most $\kappa$.
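For context, the quantities appearing in these bounds can be written out in a standard formulation of adversarial online learning (the loss notation $\ell_t(i)$ and horizon $T$ below are conventional choices, not taken from the abstract itself):

```latex
% Over T rounds the learner picks action i_t from d alternatives and the
% adversary assigns losses \ell_t(i) \in [0,1]. The (actual) regret is the
% learner's cumulative loss minus that of the best fixed action:
R_T \;=\; \sum_{t=1}^{T} \ell_t(i_t) \;-\; \min_{i \in [d]} \sum_{t=1}^{T} \ell_t(i),
\qquad
L^{\star} \;=\; \min_{i \in [d]} \sum_{t=1}^{T} \ell_t(i).
% A "small-loss" (first-order) bound scales with L^{\star} \le T rather than
% with the worst case, e.g. R_T = \tilde{O}(\sqrt{d\,L^{\star}}) for bandits,
% which improves on \tilde{O}(\sqrt{dT}) whenever the best action's loss is small.
% Pseudo-regret replaces R_T by its expectation against a fixed comparator;
% the abstract's high-probability guarantees are the stronger statement.
```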