Simple regret for infinitely many armed bandits

Alexandra Carpentier, Michal Valko
Proceedings of the 32nd International Conference on Machine Learning, PMLR 37:1133-1141, 2015.

Abstract

We consider a stochastic bandit problem with infinitely many arms. In this setting, the learner has no chance of trying all the arms even once and has to dedicate its limited number of samples only to a certain number of arms. All previous algorithms for this setting were designed for minimizing the cumulative regret of the learner. In this paper, we propose an algorithm aiming at minimizing the simple regret. As in the cumulative regret setting of infinitely many armed bandits, the rate of the simple regret will depend on a parameter β characterizing the distribution of the near-optimal arms. We prove that depending on β, our algorithm is minimax optimal either up to a multiplicative constant or up to a log(n) factor. We also provide extensions to several important cases: when β is unknown, in a natural setting where the near-optimal arms have a small variance, and in the case of unknown time horizon.
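As a toy illustration of the setting described above (not the paper's algorithm), the sketch below simulates an arm reservoir in which the probability of drawing an ε-optimal arm scales as ε^β, splits a budget of n pulls evenly across roughly n^(β/(β+1)) freshly drawn arms, and reports the simple regret of the empirically best arm. The reservoir parametrization and the allocation rule are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n=10_000, beta=1.0):
    """Toy infinitely-many-armed bandit with simple regret.

    Arm means mu satisfy P(mu >= 1 - eps) = eps^beta, realized here via
    mu = 1 - U^(1/beta) with U uniform on [0, 1]. Strategy (illustrative):
    draw K ~ n^(beta/(beta+1)) fresh arms, pull each n/K times with
    Bernoulli(mu) rewards, and recommend the best empirical mean.
    """
    K = max(1, int(n ** (beta / (beta + 1))))
    pulls = max(1, n // K)
    mus = 1.0 - rng.uniform(size=K) ** (1.0 / beta)  # reservoir draws
    means = rng.binomial(pulls, mus) / pulls         # empirical means
    best = int(np.argmax(means))
    return 1.0 - mus[best]                           # simple regret, mu* = 1

regrets = [simulate() for _ in range(50)]
print(np.mean(regrets))
```

With β = 1 (means uniform on [0, 1]) the even split over about √n·√n arms/pulls already yields a small average simple regret; the paper's analysis characterizes the minimax rate as a function of β.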

Cite this Paper


BibTeX
@InProceedings{pmlr-v37-carpentier15,
  title     = {Simple regret for infinitely many armed bandits},
  author    = {Carpentier, Alexandra and Valko, Michal},
  booktitle = {Proceedings of the 32nd International Conference on Machine Learning},
  pages     = {1133--1141},
  year      = {2015},
  editor    = {Bach, Francis and Blei, David},
  volume    = {37},
  series    = {Proceedings of Machine Learning Research},
  address   = {Lille, France},
  month     = {07--09 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v37/carpentier15.pdf},
  url       = {http://proceedings.mlr.press/v37/carpentier15.html},
  abstract  = {We consider a stochastic bandit problem with infinitely many arms. In this setting, the learner has no chance of trying all the arms even once and has to dedicate its limited number of samples only to a certain number of arms. All previous algorithms for this setting were designed for minimizing the cumulative regret of the learner. In this paper, we propose an algorithm aiming at minimizing the simple regret. As in the cumulative regret setting of infinitely many armed bandits, the rate of the simple regret will depend on a parameter β characterizing the distribution of the near-optimal arms. We prove that depending on β, our algorithm is minimax optimal either up to a multiplicative constant or up to a \log(n) factor. We also provide extensions to several important cases: when β is unknown, in a natural setting where the near-optimal arms have a small variance, and in the case of unknown time horizon.}
}
Endnote
%0 Conference Paper
%T Simple regret for infinitely many armed bandits
%A Alexandra Carpentier
%A Michal Valko
%B Proceedings of the 32nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2015
%E Francis Bach
%E David Blei
%F pmlr-v37-carpentier15
%I PMLR
%P 1133--1141
%U http://proceedings.mlr.press/v37/carpentier15.html
%V 37
%X We consider a stochastic bandit problem with infinitely many arms. In this setting, the learner has no chance of trying all the arms even once and has to dedicate its limited number of samples only to a certain number of arms. All previous algorithms for this setting were designed for minimizing the cumulative regret of the learner. In this paper, we propose an algorithm aiming at minimizing the simple regret. As in the cumulative regret setting of infinitely many armed bandits, the rate of the simple regret will depend on a parameter β characterizing the distribution of the near-optimal arms. We prove that depending on β, our algorithm is minimax optimal either up to a multiplicative constant or up to a log(n) factor. We also provide extensions to several important cases: when β is unknown, in a natural setting where the near-optimal arms have a small variance, and in the case of unknown time horizon.
RIS
TY - CPAPER
TI - Simple regret for infinitely many armed bandits
AU - Alexandra Carpentier
AU - Michal Valko
BT - Proceedings of the 32nd International Conference on Machine Learning
DA - 2015/06/01
ED - Francis Bach
ED - David Blei
ID - pmlr-v37-carpentier15
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 37
SP - 1133
EP - 1141
L1 - http://proceedings.mlr.press/v37/carpentier15.pdf
UR - http://proceedings.mlr.press/v37/carpentier15.html
AB - We consider a stochastic bandit problem with infinitely many arms. In this setting, the learner has no chance of trying all the arms even once and has to dedicate its limited number of samples only to a certain number of arms. All previous algorithms for this setting were designed for minimizing the cumulative regret of the learner. In this paper, we propose an algorithm aiming at minimizing the simple regret. As in the cumulative regret setting of infinitely many armed bandits, the rate of the simple regret will depend on a parameter β characterizing the distribution of the near-optimal arms. We prove that depending on β, our algorithm is minimax optimal either up to a multiplicative constant or up to a log(n) factor. We also provide extensions to several important cases: when β is unknown, in a natural setting where the near-optimal arms have a small variance, and in the case of unknown time horizon.
ER -
APA
Carpentier, A. & Valko, M. (2015). Simple regret for infinitely many armed bandits. Proceedings of the 32nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 37:1133-1141. Available from http://proceedings.mlr.press/v37/carpentier15.html.