lil’ UCB : An Optimal Exploration Algorithm for Multi-Armed Bandits

Kevin Jamieson, Matthew Malloy, Robert Nowak, Sébastien Bubeck
Proceedings of The 27th Conference on Learning Theory, PMLR 35:423–439, 2014.

Abstract

The paper proposes a novel upper confidence bound (UCB) procedure for identifying the arm with the largest mean in a multi-armed bandit game, in the fixed-confidence setting, using a small total number of samples. The procedure cannot be improved upon in the sense that the number of samples required to identify the best arm is within a constant factor of a lower bound based on the law of the iterated logarithm (LIL). Inspired by the LIL, we construct our confidence bounds to explicitly account for the infinite time horizon of the algorithm. In addition, by using a novel stopping time we avoid the union bound over arms that appears in other UCB-type algorithms. We prove that the algorithm is optimal up to constants and show through simulations that it outperforms the state of the art.
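To make the abstract concrete: the LIL states that for i.i.d. zero-mean, sigma-sub-Gaussian rewards, the running sum S_t satisfies limsup_{t → ∞} S_t / sqrt(2 σ² t log log t) = 1 almost surely, so any confidence bound valid at all times must scale like sqrt(log log(t) / t), and lil' UCB's bounds have exactly that form. The Python sketch below follows the structure described in the abstract (pull the arm with the highest LIL-style upper confidence bound; stop once one arm has been pulled roughly λ times as often as all others combined), but the specific constants, the `+ 2` inside the iterated logarithm, and the heuristic parameter settings are illustrative assumptions; consult the paper's Algorithm 1 and theorems for the exact forms and guarantees.

```python
import math
import random

def lil_ucb(arms, delta=0.05, eps=0.01, beta=1.0, lam=None, sigma=0.5,
            max_pulls=200_000):
    """Best-arm identification with LIL-flavored confidence bounds.

    `arms` is a list of zero-argument callables, each returning one
    stochastic reward. The constants follow the spirit of the paper's
    algorithm, but their exact values here are illustrative assumptions.
    """
    n = len(arms)
    if lam is None:
        lam = 1.0 + 10.0 / n  # assumed heuristic setting, not a verified one
    counts = [1] * n
    sums = [arm() for arm in arms]  # initialize: sample each arm once

    def bound(t_i):
        # LIL-style deviation term: note the sqrt(log log(t) / t) scaling,
        # which keeps the bound valid over an infinite time horizon.
        inner = math.log((1.0 + eps) * t_i + 2.0) / delta
        return (1.0 + beta) * (1.0 + math.sqrt(eps)) * math.sqrt(
            2.0 * sigma**2 * (1.0 + eps) * math.log(inner) / t_i)

    for _ in range(max_pulls):
        total = sum(counts)
        # Stopping rule: declare arm i best once it has been pulled roughly
        # lam times as often as all other arms combined. This is the
        # stopping-time idea that avoids a union bound over the arms.
        for i in range(n):
            if counts[i] >= 1 + lam * (total - counts[i]):
                return i
        # Otherwise pull the arm with the largest upper confidence bound.
        ucb = [sums[i] / counts[i] + bound(counts[i]) for i in range(n)]
        j = max(range(n), key=lambda i: ucb[i])
        sums[j] += arms[j]()
        counts[j] += 1
    # Fallback if the budget runs out before the stopping rule fires.
    return max(range(n), key=lambda i: sums[i] / counts[i])

# Example: Bernoulli arms (1/2-sub-Gaussian, so sigma=0.5); arm 2 should win.
means = [0.4, 0.5, 0.7]
arms = [lambda m=m: float(random.random() < m) for m in means]
print(lil_ucb(arms, delta=0.05))
```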

Cite this Paper


BibTeX
@InProceedings{pmlr-v35-jamieson14,
  title     = {lil' UCB : An Optimal Exploration Algorithm for Multi-Armed Bandits},
  author    = {Kevin Jamieson and Matthew Malloy and Robert Nowak and Sébastien Bubeck},
  booktitle = {Proceedings of The 27th Conference on Learning Theory},
  pages     = {423--439},
  year      = {2014},
  editor    = {Maria Florina Balcan and Vitaly Feldman and Csaba Szepesvári},
  volume    = {35},
  series    = {Proceedings of Machine Learning Research},
  address   = {Barcelona, Spain},
  month     = {13--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v35/jamieson14.pdf},
  url       = {http://proceedings.mlr.press/v35/jamieson14.html}
}
APA
Jamieson, K., Malloy, M., Nowak, R., & Bubeck, S. (2014). lil’ UCB : An Optimal Exploration Algorithm for Multi-Armed Bandits. Proceedings of The 27th Conference on Learning Theory, in PMLR 35:423–439.
