The Pareto Frontier of Instance-Dependent Guarantees in Multi-Player Multi-Armed Bandits with no Communication
Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:3094-3094, 2022.
Abstract
We study the stochastic multi-player multi-armed bandit problem. In this problem, there are $m$ players and $K > m$ arms, and the players cooperate to maximize their total reward. However, the players cannot communicate and are penalized (e.g. receive no reward) if they pull the same arm at the same time. We ask whether it is possible to obtain optimal instance-dependent regret $\widetilde{O}(1/\Delta)$, where $\Delta$ is the gap between the $m$-th and $(m+1)$-st best arms. Such guarantees were recently achieved by \cite{pacchiano2021instance, huang2021towards} in a model in which the players are able to implicitly communicate through intentional collisions. We show that with no communication at all, such guarantees are, surprisingly, not achievable. In fact, obtaining the optimal $\widetilde{O}(1/\Delta)$ regret for some regimes of $\Delta$ necessarily implies strictly sub-optimal regret in other regimes. Our main result is a complete characterization of the Pareto optimal instance-dependent trade-offs that are possible with no communication. Our algorithm generalizes that of \cite{bubeck2021cooperative} and enjoys the same strong no-collision property, while our lower bound is completely new.
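The sketch below is only meant to make the problem setup concrete; it is not the paper's algorithm. It simulates the no-communication model from the abstract with hypothetical arm means and a placeholder uniform-random pulling rule, and shows how the collision penalty and the gap $\Delta$ (between the $m$-th and $(m+1)$-st best arm means) enter the regret accounting.

```python
# Minimal illustration of the collision-penalized multi-player bandit model.
# Arm means, m, K, T, and the uniform-random pull rule are assumptions made
# for illustration only; they do not reflect the paper's algorithm.
import numpy as np

rng = np.random.default_rng(0)

K, m, T = 6, 3, 10_000                             # K arms, m players, horizon T
means = np.array([0.9, 0.8, 0.7, 0.5, 0.3, 0.1])   # hypothetical Bernoulli means

# Gap Delta: difference between the m-th and (m+1)-st best arm means.
sorted_means = np.sort(means)[::-1]
delta = sorted_means[m - 1] - sorted_means[m]
best_m_reward = sorted_means[:m].sum()             # optimal per-round expected reward

total_reward = 0.0
for t in range(T):
    # Each player picks an arm independently with no communication
    # (uniformly at random here, standing in for any decentralized strategy).
    pulls = rng.integers(0, K, size=m)
    for arm in np.unique(pulls):
        if np.sum(pulls == arm) == 1:              # collision penalty: reward only if the pull is unique
            total_reward += rng.binomial(1, means[arm])

regret = T * best_m_reward - total_reward
print(f"Delta = {delta:.2f}, cumulative regret ~ {regret:.1f}")
```

With this bookkeeping, an instance-dependent guarantee of the form $\widetilde{O}(1/\Delta)$ bounds the cumulative regret in terms of the gap computed above; the paper's result is that, without communication, achieving this rate for some values of $\Delta$ forces strictly worse rates for others.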