Towards Optimal Algorithms for Multi-Player Bandits without Collision Sensing Information
Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:1990-2012, 2022.
We propose a novel algorithm for multi-player multi-armed bandits without collision sensing information. Our algorithm circumvents two problems shared by all state-of-the-art algorithms: it does not require as input a lower bound on the minimal expected reward of an arm, and its performance does not scale inversely with that minimal expected reward. We prove a theoretical regret upper bound to justify these claims. We complement our theoretical results with numerical experiments, showing that the proposed algorithm outperforms the state of the art in practice.
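To make the setting concrete, the following is a minimal sketch of the standard no-sensing feedback model the abstract refers to: several players pull arms simultaneously, colliding players receive reward 0, and each player observes only their own reward, so a 0 is indistinguishable from a collision. The function name `play_round` and its arguments are illustrative, not part of the paper's algorithm.

```python
import random

def play_round(means, choices, rng=random):
    """Simulate one round of a multi-player bandit without collision sensing.

    means   : list of Bernoulli arm means.
    choices : choices[j] is the arm index pulled by player j.
    Returns the reward observed by each player. Players colliding on an
    arm all receive 0; a lone player on arm k receives a Bernoulli(means[k])
    draw. A player observing 0 cannot tell a collision from a genuine zero
    reward -- this ambiguity is the missing "collision sensing information".
    """
    counts = {}
    for arm in choices:
        counts[arm] = counts.get(arm, 0) + 1
    rewards = []
    for arm in choices:
        if counts[arm] > 1:  # collision: reward forced to 0
            rewards.append(0)
        else:                # lone player: Bernoulli draw from the arm
            rewards.append(1 if rng.random() < means[arm] else 0)
    return rewards
```

For example, with `choices = [0, 0, 1]` the first two players collide on arm 0 and both observe 0, while the third player receives an ordinary Bernoulli sample from arm 1.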