Max-Quantile Grouped Infinite-Arm Bandits

Ivan Lau, Yan Hao Ling, Mayank Shrivastava, Jonathan Scarlett
Proceedings of The 34th International Conference on Algorithmic Learning Theory, PMLR 201:909-945, 2023.

Abstract

In this paper, we consider a bandit problem in which there are a number of groups each consisting of infinitely many arms. Whenever a new arm is requested from a given group, its mean reward is drawn from an unknown reservoir distribution (different for each group), and the uncertainty in the arm’s mean reward can only be reduced via subsequent pulls of the arm. The goal is to identify the infinite-arm group whose reservoir distribution has the highest $(1-\alpha)$-quantile (e.g., median if $\alpha = \frac{1}{2}$), using as few total arm pulls as possible. We introduce a two-step algorithm that first requests a fixed number of arms from each group and then runs a finite-arm grouped max-quantile bandit algorithm. We characterize both the instance-dependent and worst-case regret, and provide a matching lower bound for the latter, while discussing various strengths, weaknesses, algorithmic improvements, and potential lower bounds associated with our instance-dependent upper bounds.
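The two-step approach in the abstract can be illustrated with a minimal simulation sketch. Everything here is an assumption for illustration: the Gaussian reservoir distributions, Bernoulli rewards, and the uniform pull allocation (the paper's second step runs an adaptive finite-arm grouped max-quantile bandit algorithm, not uniform sampling).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical instance: each group's reservoir distribution is a clipped
# Gaussian over arm means; each pull of an arm returns a Bernoulli reward.
reservoir_centres = [0.4, 0.6]   # illustrative group-level reservoir centres
alpha = 0.5                      # target the (1 - alpha)-quantile (median here)
arms_per_group = 50              # step 1: request a fixed number of arms per group
pulls_per_arm = 200              # naive uniform allocation (simplification)

quantile_estimates = []
for centre in reservoir_centres:
    # Step 1: each requested arm's mean is drawn from the reservoir distribution.
    arm_means = np.clip(rng.normal(centre, 0.1, size=arms_per_group), 0.0, 1.0)
    # Step 2 (simplified): estimate each arm's mean from its Bernoulli pulls.
    estimates = rng.binomial(pulls_per_arm, arm_means) / pulls_per_arm
    # Empirical (1 - alpha)-quantile of the estimated arm means for this group.
    quantile_estimates.append(np.quantile(estimates, 1 - alpha))

# Output the group whose reservoir appears to have the highest (1 - alpha)-quantile.
best_group = int(np.argmax(quantile_estimates))
```

With these (illustrative) parameters the second group's higher reservoir centre makes it the max-median group, so `best_group` should identify it; the paper's contribution is doing this with far fewer pulls via an adaptive second step.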

Cite this Paper


BibTeX
@InProceedings{pmlr-v201-lau23a,
  title     = {Max-Quantile Grouped Infinite-Arm Bandits},
  author    = {Lau, Ivan and Ling, Yan Hao and Shrivastava, Mayank and Scarlett, Jonathan},
  booktitle = {Proceedings of The 34th International Conference on Algorithmic Learning Theory},
  pages     = {909--945},
  year      = {2023},
  editor    = {Agrawal, Shipra and Orabona, Francesco},
  volume    = {201},
  series    = {Proceedings of Machine Learning Research},
  month     = {20 Feb--23 Feb},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v201/lau23a/lau23a.pdf},
  url       = {https://proceedings.mlr.press/v201/lau23a.html},
  abstract  = {In this paper, we consider a bandit problem in which there are a number of groups each consisting of infinitely many arms. Whenever a new arm is requested from a given group, its mean reward is drawn from an unknown reservoir distribution (different for each group), and the uncertainty in the arm's mean reward can only be reduced via subsequent pulls of the arm. The goal is to identify the infinite-arm group whose reservoir distribution has the highest $(1-\alpha)$-quantile (e.g., median if $\alpha = \frac{1}{2}$), using as few total arm pulls as possible. We introduce a two-step algorithm that first requests a fixed number of arms from each group and then runs a finite-arm grouped max-quantile bandit algorithm. We characterize both the instance-dependent and worst-case regret, and provide a matching lower bound for the latter, while discussing various strengths, weaknesses, algorithmic improvements, and potential lower bounds associated with our instance-dependent upper bounds.}
}
Endnote
%0 Conference Paper
%T Max-Quantile Grouped Infinite-Arm Bandits
%A Ivan Lau
%A Yan Hao Ling
%A Mayank Shrivastava
%A Jonathan Scarlett
%B Proceedings of The 34th International Conference on Algorithmic Learning Theory
%C Proceedings of Machine Learning Research
%D 2023
%E Shipra Agrawal
%E Francesco Orabona
%F pmlr-v201-lau23a
%I PMLR
%P 909--945
%U https://proceedings.mlr.press/v201/lau23a.html
%V 201
%X In this paper, we consider a bandit problem in which there are a number of groups each consisting of infinitely many arms. Whenever a new arm is requested from a given group, its mean reward is drawn from an unknown reservoir distribution (different for each group), and the uncertainty in the arm's mean reward can only be reduced via subsequent pulls of the arm. The goal is to identify the infinite-arm group whose reservoir distribution has the highest $(1-\alpha)$-quantile (e.g., median if $\alpha = \frac{1}{2}$), using as few total arm pulls as possible. We introduce a two-step algorithm that first requests a fixed number of arms from each group and then runs a finite-arm grouped max-quantile bandit algorithm. We characterize both the instance-dependent and worst-case regret, and provide a matching lower bound for the latter, while discussing various strengths, weaknesses, algorithmic improvements, and potential lower bounds associated with our instance-dependent upper bounds.
APA
Lau, I., Ling, Y. H., Shrivastava, M. & Scarlett, J. (2023). Max-Quantile Grouped Infinite-Arm Bandits. Proceedings of The 34th International Conference on Algorithmic Learning Theory, in Proceedings of Machine Learning Research 201:909-945. Available from https://proceedings.mlr.press/v201/lau23a.html.