Garbage In, Reward Out: Bootstrapping Exploration in Multi-Armed Bandits

Branislav Kveton, Csaba Szepesvari, Sharan Vaswani, Zheng Wen, Tor Lattimore, Mohammad Ghavamzadeh
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:3601-3610, 2019.

Abstract

We propose a bandit algorithm that explores by randomizing its history of rewards. Specifically, it pulls the arm with the highest mean reward in a non-parametric bootstrap sample of its history with pseudo rewards. We design the pseudo rewards such that the bootstrap mean is optimistic with a sufficiently high probability. We call our algorithm Giro, which stands for garbage in, reward out. We analyze Giro in a Bernoulli bandit and derive a $O(K \Delta^{-1} \log n)$ bound on its $n$-round regret, where $\Delta$ is the difference in the expected rewards of the optimal and the best suboptimal arms, and $K$ is the number of arms. The main advantage of our exploration design is that it easily generalizes to structured problems. To show this, we propose contextual Giro with an arbitrary reward generalization model. We evaluate Giro and its contextual variant on multiple synthetic and real-world problems, and observe that it performs well.
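
To make the description above concrete, here is a minimal sketch (in Python, not taken from the paper) of this bootstrap-based exploration in a Bernoulli bandit: each arm's reward history is augmented with pseudo rewards, and in every round the arm with the highest mean in a with-replacement resample of its augmented history is pulled. The function name giro_sketch, the choice of one (0, 1) pseudo-reward pair per observation, and the initialization that forces one pull of every arm are illustrative assumptions, not the authors' reference implementation.

import numpy as np

def giro_sketch(means, n_rounds, a=1, rng=None):
    """Bootstrap-exploration sketch for a Bernoulli bandit.

    means    : true success probabilities of the K arms (used only to simulate rewards)
    n_rounds : horizon n
    a        : number of (0, 1) pseudo-reward pairs added per real observation (assumed default: 1)
    """
    rng = np.random.default_rng(rng)
    K = len(means)
    histories = [[] for _ in range(K)]   # augmented reward histories, one list per arm
    total_reward = 0.0

    for _ in range(n_rounds):
        scores = np.empty(K)
        for i in range(K):
            h = histories[i]
            if not h:
                scores[i] = np.inf       # assumption: force at least one pull of every arm
            else:
                # Non-parametric bootstrap: resample the augmented history
                # with replacement and score the arm by the resample mean.
                resample = rng.choice(h, size=len(h), replace=True)
                scores[i] = resample.mean()

        arm = int(np.argmax(scores))
        reward = float(rng.random() < means[arm])   # simulated Bernoulli reward
        total_reward += reward

        # Store the observed reward together with its pseudo rewards
        # (a ones and a zeros per observation), which keep the bootstrap
        # mean optimistic often enough to drive exploration.
        histories[arm].extend([reward] + [1.0] * a + [0.0] * a)

    return total_reward

# Example usage: 4 arms, 10,000 rounds.
print(giro_sketch([0.2, 0.4, 0.6, 0.8], n_rounds=10_000, rng=0))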

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-kveton19a,
  title     = {Garbage In, Reward Out: Bootstrapping Exploration in Multi-Armed Bandits},
  author    = {Kveton, Branislav and Szepesvari, Csaba and Vaswani, Sharan and Wen, Zheng and Lattimore, Tor and Ghavamzadeh, Mohammad},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {3601--3610},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/kveton19a/kveton19a.pdf},
  url       = {https://proceedings.mlr.press/v97/kveton19a.html},
  abstract  = {We propose a bandit algorithm that explores by randomizing its history of rewards. Specifically, it pulls the arm with the highest mean reward in a non-parametric bootstrap sample of its history with pseudo rewards. We design the pseudo rewards such that the bootstrap mean is optimistic with a sufficiently high probability. We call our algorithm Giro, which stands for garbage in, reward out. We analyze Giro in a Bernoulli bandit and derive a $O(K \Delta^{-1} \log n)$ bound on its $n$-round regret, where $\Delta$ is the difference in the expected rewards of the optimal and the best suboptimal arms, and $K$ is the number of arms. The main advantage of our exploration design is that it easily generalizes to structured problems. To show this, we propose contextual Giro with an arbitrary reward generalization model. We evaluate Giro and its contextual variant on multiple synthetic and real-world problems, and observe that it performs well.}
}
Endnote
%0 Conference Paper
%T Garbage In, Reward Out: Bootstrapping Exploration in Multi-Armed Bandits
%A Branislav Kveton
%A Csaba Szepesvari
%A Sharan Vaswani
%A Zheng Wen
%A Tor Lattimore
%A Mohammad Ghavamzadeh
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-kveton19a
%I PMLR
%P 3601--3610
%U https://proceedings.mlr.press/v97/kveton19a.html
%V 97
%X We propose a bandit algorithm that explores by randomizing its history of rewards. Specifically, it pulls the arm with the highest mean reward in a non-parametric bootstrap sample of its history with pseudo rewards. We design the pseudo rewards such that the bootstrap mean is optimistic with a sufficiently high probability. We call our algorithm Giro, which stands for garbage in, reward out. We analyze Giro in a Bernoulli bandit and derive a $O(K \Delta^{-1} \log n)$ bound on its $n$-round regret, where $\Delta$ is the difference in the expected rewards of the optimal and the best suboptimal arms, and $K$ is the number of arms. The main advantage of our exploration design is that it easily generalizes to structured problems. To show this, we propose contextual Giro with an arbitrary reward generalization model. We evaluate Giro and its contextual variant on multiple synthetic and real-world problems, and observe that it performs well.
APA
Kveton, B., Szepesvari, C., Vaswani, S., Wen, Z., Lattimore, T. & Ghavamzadeh, M. (2019). Garbage In, Reward Out: Bootstrapping Exploration in Multi-Armed Bandits. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:3601-3610. Available from https://proceedings.mlr.press/v97/kveton19a.html.
