Maillard Sampling: Boltzmann Exploration Done Optimally

Jie Bian, Kwang-Sung Jun
Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:54-72, 2022.

Abstract

The PhD thesis of Maillard (2013) presents a rather obscure algorithm for the $K$-armed bandit problem. This lesser-known algorithm, which we call Maillard sampling (MS), computes the probability of choosing each arm in closed form, which is not true for Thompson sampling, a widely adopted bandit algorithm in industry. This means that bandit-logged data from running MS can be readily used for counterfactual evaluation, unlike data from Thompson sampling. Motivated by this merit, we revisit MS and perform an improved analysis to show that it achieves both asymptotic optimality and a $\sqrt{KT\log{T}}$ minimax regret bound, where $T$ is the time horizon, matching the known bounds for asymptotically optimal UCB. We then propose a variant of MS called MS$^+$ that improves the minimax bound to $\sqrt{KT\log{K}}$. MS$^+$ can also be tuned to be aggressive (i.e., explore less) without losing asymptotic optimality, a feature unavailable in existing bandit algorithms. Our numerical evaluation shows the effectiveness of MS$^+$.
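To make the closed-form sampling probability concrete, here is a minimal Python sketch of the MS rule: each arm is chosen with probability proportional to the exponential of its (negative, scaled) squared empirical gap. The exponential-of-squared-gap form follows the paper's description; the variable names, the $\sigma=1$ default, and the toy numbers below are illustrative assumptions, not the paper's pseudocode.

```python
import numpy as np

def maillard_sampling_probs(means, counts, sigma=1.0):
    """Closed-form arm probabilities for Maillard sampling (MS).

    Sketch: arm a is chosen with probability proportional to
    exp(-N_a * Delta_a^2 / (2 * sigma^2)), where Delta_a is the gap
    between the best empirical mean and arm a's empirical mean, and
    N_a is arm a's pull count. Assumes sigma-sub-Gaussian rewards;
    see the paper for the exact statement and initialization.
    """
    means = np.asarray(means, dtype=float)
    counts = np.asarray(counts, dtype=float)
    gaps = means.max() - means                      # empirical gaps Delta_a
    logits = -counts * gaps ** 2 / (2 * sigma ** 2) # best empirical arm gets logit 0
    weights = np.exp(logits - logits.max())         # numerically stable normalization
    return weights / weights.sum()

# Toy usage with made-up statistics after some pulls:
rng = np.random.default_rng(0)
means, counts = np.array([0.50, 0.45, 0.20]), np.array([40, 35, 10])
p = maillard_sampling_probs(means, counts)
arm = rng.choice(len(p), p=p)
# Logging p[arm] alongside (arm, reward) is what makes the data directly
# usable for counterfactual evaluation, e.g. the inverse propensity
# scoring estimate of a target policy pi:
#   V_hat(pi) = mean over t of pi(a_t) * r_t / p_t(a_t).
```

Because the propensity `p[arm]` is available exactly at decision time, no posterior sampling needs to be simulated or approximated after the fact, which is the practical advantage over Thompson sampling highlighted in the abstract.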

Cite this Paper


BibTeX
@InProceedings{pmlr-v151-bian22a,
  title     = {Maillard Sampling: Boltzmann Exploration Done Optimally},
  author    = {Bian, Jie and Jun, Kwang-Sung},
  booktitle = {Proceedings of The 25th International Conference on Artificial Intelligence and Statistics},
  pages     = {54--72},
  year      = {2022},
  editor    = {Camps-Valls, Gustau and Ruiz, Francisco J. R. and Valera, Isabel},
  volume    = {151},
  series    = {Proceedings of Machine Learning Research},
  month     = {28--30 Mar},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v151/bian22a/bian22a.pdf},
  url       = {https://proceedings.mlr.press/v151/bian22a.html}
}
APA
Bian, J. & Jun, K.-S. (2022). Maillard Sampling: Boltzmann Exploration Done Optimally. Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 151:54-72. Available from https://proceedings.mlr.press/v151/bian22a.html.
