The True Sample Complexity of Identifying Good Arms

Julian Katz-Samuels, Kevin Jamieson
Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, PMLR 108:1781-1791, 2020.

Abstract

We consider two multi-armed bandit problems with $n$ arms: \emph{(i)} given an $\epsilon > 0$, identify an arm with mean that is within $\epsilon$ of the largest mean and \emph{(ii)} given a threshold $\mu_0$ and integer $k$, identify $k$ arms with means larger than $\mu_0$. Existing lower bounds and algorithms for the PAC framework suggest that both of these problems require $\Omega(n)$ samples. However, we argue that the PAC framework not only conflicts with how these algorithms are used in practice, but also that these results disagree with intuition that says \emph{(i)} requires only $\Theta(\frac{n}{m})$ samples where $m = |\{ i : \mu_i > \max_{j \in [n]} \mu_j - \epsilon\}|$ and \emph{(ii)} requires $\Theta(\frac{n}{m}k)$ samples where $m = |\{ i : \mu_i > \mu_0 \}|$. We provide definitions that formalize these intuitions, obtain lower bounds that match the above sample complexities, and develop explicit, practical algorithms that achieve nearly matching upper bounds.
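The intuitive sample complexities in the abstract are driven by the count $m$ of "good" arms in each problem. A minimal sketch, on a hypothetical set of arm means (the means, $\epsilon$, $\mu_0$, and $k$ below are illustrative, not from the paper), showing how $m$ and the resulting $n/m$ scalings are computed:

```python
# Toy illustration (not the paper's algorithm): compute the counts m that
# drive the intuitive sample complexities stated in the abstract.
mu = [0.9, 0.88, 0.85, 0.5, 0.4, 0.3, 0.2, 0.1]  # hypothetical arm means
n = len(mu)

# Problem (i): count arms whose mean is within epsilon of the largest mean.
epsilon = 0.1
m_i = sum(1 for x in mu if x > max(mu) - epsilon)
print(m_i, n / m_i)  # intuition: roughly n/m samples suffice to find one good arm

# Problem (ii): count arms with mean above a threshold mu_0, of which k are sought.
mu_0, k = 0.45, 3
m_ii = sum(1 for x in mu if x > mu_0)
print(m_ii, n / m_ii * k)  # intuition: roughly (n/m)k samples suffice to find k
```

With these illustrative means, problem (i) has $m = 3$ good arms, so the intuitive budget is about $8/3$ pulls rather than the $\Omega(n)$ the PAC framework suggests; problem (ii) has $m = 4$, giving about $(8/4)\cdot 3 = 6$.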

Cite this Paper


BibTeX
@InProceedings{pmlr-v108-katz-samuels20a,
  title     = {The True Sample Complexity of Identifying Good Arms},
  author    = {Katz-Samuels, Julian and Jamieson, Kevin},
  booktitle = {Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics},
  pages     = {1781--1791},
  year      = {2020},
  editor    = {Chiappa, Silvia and Calandra, Roberto},
  volume    = {108},
  series    = {Proceedings of Machine Learning Research},
  month     = {26--28 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v108/katz-samuels20a/katz-samuels20a.pdf},
  url       = {https://proceedings.mlr.press/v108/katz-samuels20a.html},
  abstract  = {We consider two multi-armed bandit problems with $n$ arms: \emph{(i)} given an $\epsilon > 0$, identify an arm with mean that is within $\epsilon$ of the largest mean and \emph{(ii)} given a threshold $\mu_0$ and integer $k$, identify $k$ arms with means larger than $\mu_0$. Existing lower bounds and algorithms for the PAC framework suggest that both of these problems require $\Omega(n)$ samples. However, we argue that the PAC framework not only conflicts with how these algorithms are used in practice, but also that these results disagree with intuition that says \emph{(i)} requires only $\Theta(\frac{n}{m})$ samples where $m = |\{ i : \mu_i > \max_{j \in [n]} \mu_j - \epsilon\}|$ and \emph{(ii)} requires $\Theta(\frac{n}{m}k)$ samples where $m = |\{ i : \mu_i > \mu_0 \}|$. We provide definitions that formalize these intuitions, obtain lower bounds that match the above sample complexities, and develop explicit, practical algorithms that achieve nearly matching upper bounds.}
}
Endnote
%0 Conference Paper
%T The True Sample Complexity of Identifying Good Arms
%A Julian Katz-Samuels
%A Kevin Jamieson
%B Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2020
%E Silvia Chiappa
%E Roberto Calandra
%F pmlr-v108-katz-samuels20a
%I PMLR
%P 1781--1791
%U https://proceedings.mlr.press/v108/katz-samuels20a.html
%V 108
%X We consider two multi-armed bandit problems with $n$ arms: \emph{(i)} given an $\epsilon > 0$, identify an arm with mean that is within $\epsilon$ of the largest mean and \emph{(ii)} given a threshold $\mu_0$ and integer $k$, identify $k$ arms with means larger than $\mu_0$. Existing lower bounds and algorithms for the PAC framework suggest that both of these problems require $\Omega(n)$ samples. However, we argue that the PAC framework not only conflicts with how these algorithms are used in practice, but also that these results disagree with intuition that says \emph{(i)} requires only $\Theta(\frac{n}{m})$ samples where $m = |\{ i : \mu_i > \max_{j \in [n]} \mu_j - \epsilon\}|$ and \emph{(ii)} requires $\Theta(\frac{n}{m}k)$ samples where $m = |\{ i : \mu_i > \mu_0 \}|$. We provide definitions that formalize these intuitions, obtain lower bounds that match the above sample complexities, and develop explicit, practical algorithms that achieve nearly matching upper bounds.
APA
Katz-Samuels, J. & Jamieson, K. (2020). The True Sample Complexity of Identifying Good Arms. Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 108:1781-1791. Available from https://proceedings.mlr.press/v108/katz-samuels20a.html.