Tight Lower Bounds for Combinatorial Multi-Armed Bandits

Nadav Merlis, Shie Mannor
Proceedings of Thirty Third Conference on Learning Theory, PMLR 125:2830-2857, 2020.

Abstract

The Combinatorial Multi-Armed Bandit problem is a sequential decision-making problem in which an agent selects a set of arms on each round, observes feedback for each of these arms, and aims to maximize a known reward function of the arms it chose. While previous work proved regret upper bounds in this setting for general reward functions, only a few works provided matching lower bounds, all for specific reward functions. In this work, we prove regret lower bounds for combinatorial bandits that hold under mild assumptions for all smooth reward functions. We derive both problem-dependent and problem-independent bounds and show that the recently proposed Gini-weighted smoothness parameter (Merlis and Mannor, 2019) also determines the lower bounds for monotone reward functions. Notably, this implies that our lower bounds are tight up to logarithmic factors.
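To make the setting concrete, below is a minimal sketch of the combinatorial bandit interaction loop the abstract describes, with per-arm (semi-bandit) feedback. All names (`n_arms`, `reward_fn`, the uniform-random agent) are illustrative assumptions, not the paper's construction: the paper lower-bounds the regret that any agent must incur, while this sketch merely shows what "regret" is being bounded.

```python
import numpy as np

# Sketch of a combinatorial multi-armed bandit with semi-bandit feedback:
# each round the agent picks a size-k set of arms, observes a noisy sample
# per chosen arm, and is scored by a known reward function of the set.
# The random agent and the specific reward function are assumptions made
# for illustration only.

rng = np.random.default_rng(0)
n_arms, k, T = 10, 3, 1000
mu = rng.uniform(0.1, 0.9, size=n_arms)   # unknown per-arm means

def reward_fn(means_of_chosen):
    # Example of a smooth, monotone reward: probability that at least one
    # chosen arm "fires" (a standard CMAB example, not from the paper).
    return 1.0 - np.prod(1.0 - means_of_chosen)

best_set = np.argsort(mu)[-k:]            # optimal size-k super-arm
opt = reward_fn(mu[best_set])

cum_regret = 0.0
for t in range(T):
    chosen = rng.choice(n_arms, size=k, replace=False)   # naive agent
    feedback = rng.binomial(1, mu[chosen])   # per-arm semi-bandit feedback
    cum_regret += opt - reward_fn(mu[chosen])  # expected per-round regret
    # A real agent would update per-arm estimates from `feedback` here.

print(f"cumulative regret after {T} rounds: {cum_regret:.1f}")
```

The paper's results characterize how fast this cumulative regret must grow for any learning agent, in terms of the Gini-weighted smoothness of `reward_fn`.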

Cite this Paper


BibTeX
@InProceedings{pmlr-v125-merlis20a,
  title     = {Tight Lower Bounds for Combinatorial Multi-Armed Bandits},
  author    = {Merlis, Nadav and Mannor, Shie},
  booktitle = {Proceedings of Thirty Third Conference on Learning Theory},
  pages     = {2830--2857},
  year      = {2020},
  editor    = {Abernethy, Jacob and Agarwal, Shivani},
  volume    = {125},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--12 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v125/merlis20a/merlis20a.pdf},
  url       = {https://proceedings.mlr.press/v125/merlis20a.html},
  abstract  = {The Combinatorial Multi-Armed Bandit problem is a sequential decision-making problem in which an agent selects a set of arms on each round, observes feedback for each of these arms, and aims to maximize a known reward function of the arms it chose. While previous work proved regret upper bounds in this setting for general reward functions, only a few works provided matching lower bounds, all for specific reward functions. In this work, we prove regret lower bounds for combinatorial bandits that hold under mild assumptions for all smooth reward functions. We derive both problem-dependent and problem-independent bounds and show that the recently proposed Gini-weighted smoothness parameter (Merlis and Mannor, 2019) also determines the lower bounds for monotone reward functions. Notably, this implies that our lower bounds are tight up to logarithmic factors.}
}
Endnote
%0 Conference Paper
%T Tight Lower Bounds for Combinatorial Multi-Armed Bandits
%A Nadav Merlis
%A Shie Mannor
%B Proceedings of Thirty Third Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2020
%E Jacob Abernethy
%E Shivani Agarwal
%F pmlr-v125-merlis20a
%I PMLR
%P 2830--2857
%U https://proceedings.mlr.press/v125/merlis20a.html
%V 125
%X The Combinatorial Multi-Armed Bandit problem is a sequential decision-making problem in which an agent selects a set of arms on each round, observes feedback for each of these arms, and aims to maximize a known reward function of the arms it chose. While previous work proved regret upper bounds in this setting for general reward functions, only a few works provided matching lower bounds, all for specific reward functions. In this work, we prove regret lower bounds for combinatorial bandits that hold under mild assumptions for all smooth reward functions. We derive both problem-dependent and problem-independent bounds and show that the recently proposed Gini-weighted smoothness parameter (Merlis and Mannor, 2019) also determines the lower bounds for monotone reward functions. Notably, this implies that our lower bounds are tight up to logarithmic factors.
APA
Merlis, N. & Mannor, S. (2020). Tight Lower Bounds for Combinatorial Multi-Armed Bandits. Proceedings of Thirty Third Conference on Learning Theory, in Proceedings of Machine Learning Research 125:2830-2857. Available from https://proceedings.mlr.press/v125/merlis20a.html.
