Solving Bernoulli Rank-One Bandits with Unimodal Thompson Sampling

Cindy Trinh, Emilie Kaufmann, Claire Vernade, Richard Combes
Proceedings of the 31st International Conference on Algorithmic Learning Theory, PMLR 117:862-889, 2020.

Abstract

Stochastic Rank-One Bandits are a simple framework for regret minimization problems over rank-one matrices of arms. The initially proposed algorithms are proved to have logarithmic regret, but do not match the existing lower bound for this problem. We close this gap by first proving that rank-one bandits are a particular instance of unimodal bandits, and then providing a new analysis of Unimodal Thompson Sampling (UTS). We prove an asymptotically optimal regret bound on the frequentist regret of UTS and we support our claims with simulations showing the significant improvement of our method compared to the state-of-the-art.
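To make the setting concrete, here is a minimal simulation sketch of a leader-based Thompson Sampling rule on a Bernoulli rank-one bandit. This is an illustrative reading of the abstract, not the paper's exact algorithm: the row/column neighbourhood and the leader-exploitation schedule below are simplifying assumptions, and the function name `uts_rank_one` is hypothetical.

```python
import numpy as np

def uts_rank_one(u, v, T, seed=0):
    """Sketch of Unimodal Thompson Sampling on a Bernoulli rank-one bandit.

    Arms are the entries (i, j) of the rank-one matrix u v^T; pulling arm
    (i, j) yields a Bernoulli(u[i] * v[j]) reward.  As an assumption, the
    unimodal neighbourhood of an arm is taken to be all entries sharing
    its row or its column.
    """
    rng = np.random.default_rng(seed)
    K, L = len(u), len(v)
    alpha = np.ones((K, L))            # Beta posterior: successes + 1
    beta = np.ones((K, L))             # Beta posterior: failures + 1
    counts = np.zeros((K, L), dtype=int)
    leader_streak = np.zeros((K, L), dtype=int)

    for _ in range(T):
        # Leader: arm with the highest empirical mean under the posterior.
        means = alpha / (alpha + beta)
        li, lj = np.unravel_index(np.argmax(means), means.shape)
        leader_streak[li, lj] += 1

        # Periodically exploit the leader; otherwise Thompson-sample
        # within its row/column neighbourhood (exploitation period K + L
        # is an assumed, not paper-prescribed, choice).
        if leader_streak[li, lj] % (K + L) == 0:
            i, j = li, lj
        else:
            nbrs = [(li, jj) for jj in range(L)] + [(ii, lj) for ii in range(K)]
            samples = [rng.beta(alpha[a], beta[a]) for a in nbrs]
            i, j = nbrs[int(np.argmax(samples))]

        reward = int(rng.random() < u[i] * v[j])
        alpha[i, j] += reward
        beta[i, j] += 1 - reward
        counts[i, j] += 1
    return counts

# With a large gap, play concentrates on the best entry (0, 0).
counts = uts_rank_one([0.9, 0.2], [0.8, 0.3], T=2000)
```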

Cite this Paper


BibTeX
@InProceedings{pmlr-v117-trinh20a,
  title     = {Solving Bernoulli Rank-One Bandits with Unimodal Thompson Sampling},
  author    = {Trinh, Cindy and Kaufmann, Emilie and Vernade, Claire and Combes, Richard},
  booktitle = {Proceedings of the 31st International Conference on Algorithmic Learning Theory},
  pages     = {862--889},
  year      = {2020},
  editor    = {Kontorovich, Aryeh and Neu, Gergely},
  volume    = {117},
  series    = {Proceedings of Machine Learning Research},
  month     = {08 Feb--11 Feb},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v117/trinh20a/trinh20a.pdf},
  url       = {http://proceedings.mlr.press/v117/trinh20a.html},
  abstract  = {Stochastic Rank-One Bandits are a simple framework for regret minimization problems over rank-one matrices of arms. The initially proposed algorithms are proved to have logarithmic regret, but do not match the existing lower bound for this problem. We close this gap by first proving that rank-one bandits are a particular instance of unimodal bandits, and then providing a new analysis of Unimodal Thompson Sampling (UTS). We prove an asymptotically optimal regret bound on the frequentist regret of UTS and we support our claims with simulations showing the significant improvement of our method compared to the state-of-the-art.}
}
Endnote
%0 Conference Paper
%T Solving Bernoulli Rank-One Bandits with Unimodal Thompson Sampling
%A Cindy Trinh
%A Emilie Kaufmann
%A Claire Vernade
%A Richard Combes
%B Proceedings of the 31st International Conference on Algorithmic Learning Theory
%C Proceedings of Machine Learning Research
%D 2020
%E Aryeh Kontorovich
%E Gergely Neu
%F pmlr-v117-trinh20a
%I PMLR
%P 862--889
%U http://proceedings.mlr.press/v117/trinh20a.html
%V 117
%X Stochastic Rank-One Bandits are a simple framework for regret minimization problems over rank-one matrices of arms. The initially proposed algorithms are proved to have logarithmic regret, but do not match the existing lower bound for this problem. We close this gap by first proving that rank-one bandits are a particular instance of unimodal bandits, and then providing a new analysis of Unimodal Thompson Sampling (UTS). We prove an asymptotically optimal regret bound on the frequentist regret of UTS and we support our claims with simulations showing the significant improvement of our method compared to the state-of-the-art.
APA
Trinh, C., Kaufmann, E., Vernade, C. & Combes, R. (2020). Solving Bernoulli Rank-One Bandits with Unimodal Thompson Sampling. Proceedings of the 31st International Conference on Algorithmic Learning Theory, in Proceedings of Machine Learning Research 117:862-889. Available from http://proceedings.mlr.press/v117/trinh20a.html.