Adaptive Bandits: Towards the best history-dependent strategy
Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, PMLR 15:570-578, 2011.
Abstract
We consider multi-armed bandit games with possibly adaptive opponents. We introduce models Θ of constraints based on equivalence classes on the common history (the information shared by the player and the opponent), which define two learning scenarios: (1) the opponent is constrained, i.e. he provides rewards that are stochastic functions of equivalence classes defined by some model θ∗ ∈ Θ, and the regret is measured with respect to (w.r.t.) the best history-dependent strategy; (2) the opponent is arbitrary and we measure the regret w.r.t. the best strategy among all mappings from classes to actions (i.e. the best history-class-based strategy) for the best model in Θ. This allows us to model opponents (case 1) or strategies (case 2) that handle finite memory, periodicity, standard stochastic bandits, and other situations. When Θ = {θ}, i.e. only one model is considered, we derive tractable algorithms achieving a tight regret (at time T) bounded by Õ(√(TAC)), where C is the number of classes of θ. When many models are available, all known algorithms achieving a nice regret of order O(√T) are unfortunately not tractable and scale poorly with the number of models |Θ|. Our contribution here is to provide tractable algorithms with a regret bounded by O(T^(2/3) C^(1/3) log(|Θ|)^(1/2)).
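To make the single-model scenario concrete, below is a minimal Python sketch, not the paper's algorithm: when Θ = {θ} with C equivalence classes on the common history, one natural baseline is to run an independent adversarial-bandit learner (here, vanilla Exp3) inside each class. The memory-1 model `history_class`, the reward function, and the exploration rate are all hypothetical choices made for this illustration.

```python
# A minimal, illustrative sketch (not the paper's algorithm): with a single
# known model theta defining C equivalence classes on the common history,
# run one Exp3 instance per class. All names and constants below are
# assumptions chosen for this example.
import math
import random

A = 3           # number of actions
C = 4           # number of history classes under the assumed model theta
T = 10_000      # horizon
# A simple (untuned) Exp3 exploration rate, scaled for C parallel instances.
gamma = min(1.0, math.sqrt(C * A * math.log(A) / T))

weights = [[1.0] * A for _ in range(C)]  # one Exp3 weight vector per class

def history_class(history):
    """Hypothetical memory-1 model theta: the class depends only on the
    player's previous action (class 0 is reserved for the empty history)."""
    return 0 if not history else 1 + history[-1] % (C - 1)

def opponent_reward(cls, action):
    """Constrained opponent (scenario 1): Bernoulli rewards whose means are
    a fixed stochastic function of the current history class and action."""
    mean = 0.3 + 0.5 * ((action + cls) % A == 0)
    return 1.0 if random.random() < mean else 0.0

history, total = [], 0.0
for t in range(T):
    cls = history_class(history)
    w = weights[cls]
    s = sum(w)
    probs = [(1 - gamma) * wi / s + gamma / A for wi in w]
    action = random.choices(range(A), probs)[0]
    r = opponent_reward(cls, action)
    total += r
    # Exp3 update with an importance-weighted reward estimate.
    w[action] *= math.exp(gamma * (r / probs[action]) / A)
    m = max(w)                          # renormalize to avoid overflow
    weights[cls] = [wi / m for wi in w]
    history.append(action)

print(f"average reward over {T} rounds: {total / T:.3f}")
```

Since each of the C classes hosts its own Exp3 instance over A actions, the per-class Õ(√(T_c A)) guarantees sum (by concavity of √) to an overall bound of order Õ(√(TAC)), matching the shape of the single-model regret stated in the abstract.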