Analysis of Thompson Sampling for the Multi-armed Bandit Problem

Shipra Agrawal, Navin Goyal
Proceedings of the 25th Annual Conference on Learning Theory, PMLR 23:39.1-39.26, 2012.

Abstract

The multi-armed bandit problem is a popular model for studying the exploration/exploitation trade-off in sequential decision problems. Many algorithms are now available for this well-studied problem. One of the earliest algorithms, given by W. R. Thompson, dates back to 1933. This algorithm, referred to as Thompson Sampling, is a natural Bayesian algorithm. The basic idea is to choose an arm to play according to its probability of being the best arm. The Thompson Sampling algorithm has been shown experimentally to be close to optimal. In addition, it is efficient to implement and exhibits several desirable properties, such as small regret for delayed feedback. However, theoretical understanding of this algorithm was quite limited. In this paper, for the first time, we show that the Thompson Sampling algorithm achieves logarithmic expected regret for the stochastic multi-armed bandit problem. More precisely, for the stochastic two-armed bandit problem, the expected regret in time T is O\left(\frac{\ln T}{\Delta} + \frac{1}{\Delta^3}\right), where \Delta is the gap between the means of the two arms. For the stochastic N-armed bandit problem, the expected regret in time T is O\left(\left(\sum_{i=2}^{N} \frac{1}{\Delta_i^2}\right)^2 \ln T\right). Our bounds are optimal except for the dependence on \Delta_i and the constant factors in the big-Oh.
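
For readers unfamiliar with the algorithm, the following is a minimal sketch of Thompson Sampling for Bernoulli-reward arms with independent Beta(1,1) priors, the setting analyzed in the paper. The arm means and horizon in the example are illustrative assumptions, not taken from the paper.

    # Minimal sketch of Thompson Sampling for Bernoulli-reward arms,
    # using independent Beta(1,1) priors on each arm's mean.
    # The reward probabilities `true_means` below are illustrative assumptions.
    import random

    def thompson_sampling(true_means, horizon):
        n = len(true_means)
        successes = [0] * n   # number of reward-1 observations per arm
        failures = [0] * n    # number of reward-0 observations per arm
        total_reward = 0
        for _ in range(horizon):
            # Sample a mean estimate for each arm from its Beta posterior.
            samples = [random.betavariate(successes[i] + 1, failures[i] + 1)
                       for i in range(n)]
            # Play the arm whose sampled mean is largest; this is equivalent to
            # playing each arm with its posterior probability of being the best.
            arm = max(range(n), key=lambda i: samples[i])
            reward = 1 if random.random() < true_means[arm] else 0
            successes[arm] += reward
            failures[arm] += 1 - reward
            total_reward += reward
        return total_reward

    # Example: a two-armed instance with gap Delta = 0.2.
    print(thompson_sampling([0.5, 0.7], horizon=10000))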

Cite this Paper


BibTeX
@InProceedings{pmlr-v23-agrawal12,
  title     = {Analysis of Thompson Sampling for the Multi-armed Bandit Problem},
  author    = {Agrawal, Shipra and Goyal, Navin},
  booktitle = {Proceedings of the 25th Annual Conference on Learning Theory},
  pages     = {39.1--39.26},
  year      = {2012},
  editor    = {Mannor, Shie and Srebro, Nathan and Williamson, Robert C.},
  volume    = {23},
  series    = {Proceedings of Machine Learning Research},
  address   = {Edinburgh, Scotland},
  month     = {25--27 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v23/agrawal12/agrawal12.pdf},
  url       = {https://proceedings.mlr.press/v23/agrawal12.html}
}
APA
Agrawal, S., & Goyal, N. (2012). Analysis of Thompson Sampling for the Multi-armed Bandit Problem. Proceedings of the 25th Annual Conference on Learning Theory, in Proceedings of Machine Learning Research 23:39.1-39.26. Available from https://proceedings.mlr.press/v23/agrawal12.html.
