A Novel Confidence-Based Algorithm for Structured Bandits


Andrea Tirinzoni, Alessandro Lazaric, Marcello Restelli;
Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, PMLR 108:3175-3185, 2020.


We study finite-armed stochastic bandits where the rewards of each arm may be correlated with those of other arms. We introduce a novel phased algorithm that exploits the given structure to build confidence sets over the parameters of the true bandit problem and rapidly discard all suboptimal arms. In particular, unlike standard bandit algorithms without structure, we show that the number of times a suboptimal arm is selected may actually be reduced thanks to the information collected by pulling other arms. Furthermore, we show that, in some structures, the regret of an anytime extension of our algorithm is uniformly bounded over time. For these constant-regret structures, we also derive a matching lower bound. Finally, we demonstrate numerically that our approach better exploits certain structures than existing methods.
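To illustrate the kind of mechanism the abstract describes, the sketch below implements generic phased elimination with confidence sets in a *linearly* structured bandit, where each arm's mean reward is an inner product between a shared unknown parameter and a known arm feature. This is a minimal stand-in, not the paper's algorithm: the linear structure, the doubling phase lengths, and the confidence radius `beta` are all illustrative assumptions. It does show the key phenomenon from the abstract: pulling one arm tightens the shared parameter estimate, so other (suboptimal) arms can be discarded without being pulled many times.

```python
import numpy as np

def phased_elimination(features, theta_star, horizon, delta=0.05, seed=0):
    """Illustrative phased elimination for a linearly structured bandit.

    Arm i has mean reward <theta_star, features[i]> plus Gaussian noise.
    Because all arms share theta_star, every pull informs the confidence
    set used to eliminate arms.
    """
    rng = np.random.default_rng(seed)
    K, d = features.shape
    active = list(range(K))
    V = np.eye(d)            # regularized design matrix
    b = np.zeros(d)          # accumulated feature-weighted rewards
    t, phase = 0, 1
    while t < horizon and len(active) > 1:
        # Pull each still-active arm 2**phase times (phase lengths double).
        for i in active:
            for _ in range(2 ** phase):
                if t >= horizon:
                    break
                x = features[i]
                reward = x @ theta_star + rng.normal()
                V += np.outer(x, x)
                b += reward * x
                t += 1
        # Least-squares estimate of the shared parameter.
        theta_hat = np.linalg.solve(V, b)
        # Crude ellipsoidal confidence radius (an assumption, not the
        # paper's confidence set construction).
        beta = np.sqrt(2 * np.log(K * phase ** 2 / delta)) + 1.0
        V_inv = np.linalg.inv(V)
        width = np.array([beta * np.sqrt(f @ V_inv @ f) for f in features])
        mean = features @ theta_hat
        # Discard every arm whose upper bound falls below the best lower bound.
        best_lcb = max(mean[i] - width[i] for i in active)
        active = [i for i in active if mean[i] + width[i] >= best_lcb]
        phase += 1
    return active
```

For example, with three arms whose features are `[1,0]`, `[0,1]`, `[0.7,0.7]` and `theta_star = [1,0]`, the clearly suboptimal second arm is typically eliminated after only a few phases, partly on the strength of samples gathered from the other arms.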
