Old Dog Learns New Tricks: Randomized UCB for Bandit Problems

Sharan Vaswani, Abbas Mehrabian, Audrey Durand, Branislav Kveton
Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, PMLR 108:1988-1998, 2020.

Abstract

We propose RandUCB, a bandit strategy that uses theoretically derived confidence intervals similar to upper confidence bound (UCB) algorithms but, akin to Thompson sampling (TS), uses randomization to trade off exploration and exploitation. In the $K$-armed bandit setting, we show that there are infinitely many variants of RandUCB, all of which achieve the minimax-optimal $\widetilde{O}(\sqrt{K T})$ regret after $T$ rounds. Moreover, in a specific multi-armed bandit setting, we show that both UCB and TS can be recovered as special cases of RandUCB. For structured bandits, where each arm is associated with a $d$-dimensional feature vector and rewards are distributed according to a linear or generalized linear model, we prove that RandUCB achieves the minimax-optimal $\widetilde{O}(d \sqrt{T})$ regret even with infinitely many arms. We demonstrate the practical effectiveness of RandUCB with experiments in both multi-armed and structured bandit settings: it matches the empirical performance of TS while retaining the theoretically optimal regret bounds of UCB algorithms, thus achieving the best of both worlds.
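
To make the randomization concrete, below is a minimal sketch of a RandUCB-style strategy for the $K$-armed setting: each round, the learner pulls the arm maximizing the empirical mean plus a randomly scaled confidence width, where the multiplier $Z$ is drawn from a discrete distribution supported on $[0, L]$. The particular sampling distribution, discretization, and confidence width used here are illustrative assumptions, not the paper's exact instantiation.

import numpy as np

# Minimal RandUCB-style sketch for a K-armed Bernoulli bandit (illustrative only).
# The truncated-Gaussian weights on the grid {0, L/M, ..., L} and the UCB1-style
# confidence width are assumptions chosen for readability, not the paper's exact choices.
def rand_ucb(arm_means, T, L=2.0, M=20, seed=0):
    rng = np.random.default_rng(seed)
    K = len(arm_means)
    counts = np.zeros(K)   # number of pulls of each arm
    sums = np.zeros(K)     # cumulative reward of each arm

    # Discrete sampling distribution for the random exploration multiplier Z.
    support = np.linspace(0.0, L, M + 1)
    weights = np.exp(-0.5 * support ** 2)
    weights /= weights.sum()

    total_reward = 0.0
    for t in range(T):
        if t < K:
            arm = t  # pull each arm once to initialize the estimates
        else:
            means = sums / counts
            width = np.sqrt(2.0 * np.log(T) / counts)  # UCB-style confidence width
            z = rng.choice(support, p=weights)         # one shared Z per round
            arm = int(np.argmax(means + z * width))
        reward = float(rng.random() < arm_means[arm])  # Bernoulli reward
        counts[arm] += 1
        sums[arm] += reward
        total_reward += reward
    return total_reward

# Example: 5 Bernoulli arms, horizon of 10,000 rounds.
print(rand_ucb([0.1, 0.2, 0.3, 0.4, 0.5], T=10_000))

Intuitively, placing all of the mass of $Z$ at the upper endpoint $L$ makes the rule a deterministic UCB-style index policy, while spreading the mass over $[0, L]$ yields TS-like randomized exploration.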

Cite this Paper


BibTeX
@InProceedings{pmlr-v108-vaswani20a,
  title     = {Old Dog Learns New Tricks: Randomized UCB for Bandit Problems},
  author    = {Vaswani, Sharan and Mehrabian, Abbas and Durand, Audrey and Kveton, Branislav},
  booktitle = {Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics},
  pages     = {1988--1998},
  year      = {2020},
  editor    = {Chiappa, Silvia and Calandra, Roberto},
  volume    = {108},
  series    = {Proceedings of Machine Learning Research},
  month     = {26--28 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v108/vaswani20a/vaswani20a.pdf},
  url       = {https://proceedings.mlr.press/v108/vaswani20a.html}
}
APA
Vaswani, S., Mehrabian, A., Durand, A. & Kveton, B. (2020). Old Dog Learns New Tricks: Randomized UCB for Bandit Problems. Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 108:1988-1998. Available from https://proceedings.mlr.press/v108/vaswani20a.html.
