Tight (Lower) Bounds for the Fixed Budget Best Arm Identification Bandit Problem

Alexandra Carpentier, Andrea Locatelli
Proceedings of the 29th Annual Conference on Learning Theory, PMLR 49:590-604, 2016.

Abstract

We consider the problem of \textit{best arm identification} with a \textit{fixed budget} T, in the K-armed stochastic bandit setting, with arm distributions defined on [0,1]. We prove that any bandit strategy, for at least one bandit problem characterized by a complexity H, will misidentify the best arm with probability lower bounded by $\exp\Big(-\frac{T}{\log(K)H}\Big)$, where $H$ is the sum, over all sub-optimal arms, of the inverse of the squared gaps. Our result formally disproves the general belief, coming from results in the fixed confidence setting, that there must exist an algorithm for this problem whose probability of error is upper bounded by $\exp(-T/H)$. It also proves that some existing strategies based on the Successive Rejection of the arms are optimal, thereby closing the gap between upper and lower bounds for the fixed budget best arm identification problem.
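The quantities in the abstract can be made concrete with a small numeric sketch. The function names, the example arm means, and the budget below are illustrative assumptions, not from the paper; constants in the bound are omitted, so only the order of the rate $\exp(-T/(\log(K)H))$ is shown.

```python
import math

def complexity_H(means):
    """Complexity H from the abstract: the sum, over all sub-optimal
    arms, of the inverse squared gaps to the best arm."""
    best = max(means)
    gaps = [best - m for m in means if best - m > 0]
    return sum(1.0 / g**2 for g in gaps)

def lower_bound_rate(T, K, H):
    """Order of the misidentification lower bound, exp(-T / (log(K) * H)).
    Universal constants from the paper's theorem are omitted."""
    return math.exp(-T / (math.log(K) * H))

# Hypothetical example: K = 4 arms with gaps 0.1, 0.2, 0.3 to the best arm.
means = [0.9, 0.8, 0.7, 0.6]
H = complexity_H(means)  # 1/0.01 + 1/0.04 + 1/0.09
print(H, lower_bound_rate(2000, len(means), H))
```

Note that for K > e the rate $\exp(-T/(\log(K)H))$ is strictly larger than $\exp(-T/H)$, which is the sense in which the believed $\exp(-T/H)$ rate cannot be attained by any strategy on all problems.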

Cite this Paper


BibTeX
@InProceedings{pmlr-v49-carpentier16,
  title     = {Tight (Lower) Bounds for the Fixed Budget Best Arm Identification Bandit Problem},
  author    = {Alexandra Carpentier and Andrea Locatelli},
  booktitle = {29th Annual Conference on Learning Theory},
  pages     = {590--604},
  year      = {2016},
  editor    = {Vitaly Feldman and Alexander Rakhlin and Ohad Shamir},
  volume    = {49},
  series    = {Proceedings of Machine Learning Research},
  address   = {Columbia University, New York, New York, USA},
  month     = {23--26 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v49/carpentier16.pdf},
  url       = {http://proceedings.mlr.press/v49/carpentier16.html},
  abstract  = {We consider the problem of \textit{best arm identification} with a \textit{fixed budget} T, in the K-armed stochastic bandit setting, with arm distributions defined on [0,1]. We prove that any bandit strategy, for at least one bandit problem characterized by a complexity H, will misidentify the best arm with probability lower bounded by $\exp\Big(-\frac{T}{\log(K)H}\Big)$, where $H$ is the sum, over all sub-optimal arms, of the inverse of the squared gaps. Our result formally disproves the general belief, coming from results in the fixed confidence setting, that there must exist an algorithm for this problem whose probability of error is upper bounded by $\exp(-T/H)$. It also proves that some existing strategies based on the Successive Rejection of the arms are optimal, thereby closing the gap between upper and lower bounds for the fixed budget best arm identification problem.}
}
Endnote
%0 Conference Paper %T Tight (Lower) Bounds for the Fixed Budget Best Arm Identification Bandit Problem %A Alexandra Carpentier %A Andrea Locatelli %B 29th Annual Conference on Learning Theory %C Proceedings of Machine Learning Research %D 2016 %E Vitaly Feldman %E Alexander Rakhlin %E Ohad Shamir %F pmlr-v49-carpentier16 %I PMLR %J Proceedings of Machine Learning Research %P 590--604 %U http://proceedings.mlr.press %V 49 %W PMLR %X We consider the problem of \textit{best arm identification} with a \textit{fixed budget} T, in the K-armed stochastic bandit setting, with arm distributions defined on [0,1]. We prove that any bandit strategy, for at least one bandit problem characterized by a complexity H, will misidentify the best arm with probability lower bounded by $\exp\Big(-\frac{T}{\log(K)H}\Big)$, where $H$ is the sum, over all sub-optimal arms, of the inverse of the squared gaps. Our result formally disproves the general belief, coming from results in the fixed confidence setting, that there must exist an algorithm for this problem whose probability of error is upper bounded by $\exp(-T/H)$. It also proves that some existing strategies based on the Successive Rejection of the arms are optimal, thereby closing the gap between upper and lower bounds for the fixed budget best arm identification problem.
RIS
TY - CPAPER TI - Tight (Lower) Bounds for the Fixed Budget Best Arm Identification Bandit Problem AU - Alexandra Carpentier AU - Andrea Locatelli BT - 29th Annual Conference on Learning Theory PY - 2016/06/06 DA - 2016/06/06 ED - Vitaly Feldman ED - Alexander Rakhlin ED - Ohad Shamir ID - pmlr-v49-carpentier16 PB - PMLR SP - 590 DP - PMLR EP - 604 L1 - http://proceedings.mlr.press/v49/carpentier16.pdf UR - http://proceedings.mlr.press/v49/carpentier16.html AB - We consider the problem of \textit{best arm identification} with a \textit{fixed budget} T, in the K-armed stochastic bandit setting, with arm distributions defined on [0,1]. We prove that any bandit strategy, for at least one bandit problem characterized by a complexity H, will misidentify the best arm with probability lower bounded by $\exp\Big(-\frac{T}{\log(K)H}\Big)$, where $H$ is the sum, over all sub-optimal arms, of the inverse of the squared gaps. Our result formally disproves the general belief, coming from results in the fixed confidence setting, that there must exist an algorithm for this problem whose probability of error is upper bounded by $\exp(-T/H)$. It also proves that some existing strategies based on the Successive Rejection of the arms are optimal, thereby closing the gap between upper and lower bounds for the fixed budget best arm identification problem. ER -
APA
Carpentier, A. & Locatelli, A. (2016). Tight (Lower) Bounds for the Fixed Budget Best Arm Identification Bandit Problem. 29th Annual Conference on Learning Theory, in PMLR 49:590-604.

Related Material