Random Effect Bandits

Rong Zhu, Branislav Kveton
Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:3091-3107, 2022.

Abstract

This paper studies regret minimization in a multi-armed bandit. It is well known that side information, such as the prior distribution of arm means in Thompson sampling, can improve the statistical efficiency of a bandit algorithm. While the prior is a blessing when correctly specified, it is a curse when misspecified. To address this issue, we introduce the assumption of a random-effect model to bandits. In this model, the mean arm rewards are drawn independently from an unknown distribution, which we estimate. We derive a random-effect estimator of the arm means, analyze its uncertainty, and design a UCB algorithm, ReUCB, that uses it. We analyze ReUCB and derive an upper bound on its n-round Bayes regret, which improves upon the regret of algorithms that ignore the random-effect structure. Our experiments show that ReUCB can outperform Thompson sampling without knowing the prior distribution of arm means.
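To make the random-effect idea concrete, below is a minimal Python sketch of an empirical-Bayes UCB in the spirit of ReUCB. It assumes Gaussian rewards with known variance, estimates the mean and variance of the unknown arm-mean distribution by method of moments, and shrinks each arm's sample mean toward the estimated population mean. This is an illustration under stated assumptions, not the paper's exact estimator or confidence width; all names (reucb_sketch, alpha, and so on) are hypothetical.

import numpy as np

def reucb_sketch(arm_means, sigma2=1.0, n_rounds=5000, alpha=2.0, seed=0):
    """Empirical-Bayes UCB sketch: arm means treated as i.i.d. draws
    from an unknown distribution whose moments are estimated online.
    Requires at least two arms so the between-arm variance is defined."""
    rng = np.random.default_rng(seed)
    K = len(arm_means)
    counts = np.zeros(K)
    sums = np.zeros(K)

    # Pull each arm once so every sample mean is defined.
    for a in range(K):
        sums[a] += rng.normal(arm_means[a], np.sqrt(sigma2))
        counts[a] += 1

    for t in range(K, n_rounds):
        means = sums / counts
        # Method-of-moments estimates of the random-effect distribution:
        # population mean, and between-arm variance with the average
        # sampling noise subtracted off, floored at a small positive value.
        mu_hat = means.mean()
        tau2_hat = max(means.var(ddof=1) - np.mean(sigma2 / counts), 1e-8)

        # Shrink each arm's sample mean toward mu_hat; the shrinkage
        # weight w is the usual Gaussian posterior weight on the data.
        w = tau2_hat / (tau2_hat + sigma2 / counts)
        post_mean = w * means + (1 - w) * mu_hat
        post_var = w * sigma2 / counts

        # UCB index from the shrunken posterior; alpha tunes exploration.
        ucb = post_mean + np.sqrt(alpha * post_var * np.log(t + 1))
        a = int(np.argmax(ucb))
        sums[a] += rng.normal(arm_means[a], np.sqrt(sigma2))
        counts[a] += 1

    return counts  # pull counts per arm, e.g. reucb_sketch([0.1, 0.5, 0.9])

Because the shrinkage target mu_hat and strength tau2_hat are estimated from the arms themselves, no prior needs to be specified, which is the contrast with Thompson sampling drawn in the abstract.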

Cite this Paper


BibTeX
@InProceedings{pmlr-v151-zhu22b,
  title     = {Random Effect Bandits},
  author    = {Zhu, Rong and Kveton, Branislav},
  booktitle = {Proceedings of The 25th International Conference on Artificial Intelligence and Statistics},
  pages     = {3091--3107},
  year      = {2022},
  editor    = {Camps-Valls, Gustau and Ruiz, Francisco J. R. and Valera, Isabel},
  volume    = {151},
  series    = {Proceedings of Machine Learning Research},
  month     = {28--30 Mar},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v151/zhu22b/zhu22b.pdf},
  url       = {https://proceedings.mlr.press/v151/zhu22b.html}
}
EndNote
%0 Conference Paper
%T Random Effect Bandits
%A Rong Zhu
%A Branislav Kveton
%B Proceedings of The 25th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2022
%E Gustau Camps-Valls
%E Francisco J. R. Ruiz
%E Isabel Valera
%F pmlr-v151-zhu22b
%I PMLR
%P 3091--3107
%U https://proceedings.mlr.press/v151/zhu22b.html
%V 151
APA
Zhu, R. & Kveton, B. (2022). Random Effect Bandits. Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 151:3091-3107. Available from https://proceedings.mlr.press/v151/zhu22b.html.
