Corralling Stochastic Bandit Algorithms

Raman Arora, Teodor Vanislavov Marinov, Mehryar Mohri
Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, PMLR 130:2116-2124, 2021.

Abstract

We study the problem of corralling stochastic bandit algorithms, that is, combining multiple bandit algorithms designed for a stochastic environment, with the goal of devising a corralling algorithm that performs almost as well as the best base algorithm. We give two general algorithms for this setting, which we show enjoy favorable regret guarantees. We show that the regret of the corralling algorithms is no worse than that of the best base algorithm containing the arm with the highest reward, and depends on the gap between the highest reward and the other rewards.
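To make the corralling setting concrete, here is a minimal, hypothetical sketch: each base bandit learner is treated as a meta-arm, and a UCB1 learner is run at both levels. The names `UCB1`, `NaiveCorraller`, and the two-level structure below are illustrative assumptions only; this is not the paper's algorithm, which uses more careful techniques to preserve the base algorithms' regret guarantees.

```python
import math
import random

class UCB1:
    """A simple UCB1 bandit learner over a given set of arms."""
    def __init__(self, arms):
        self.arms = list(arms)
        self.counts = {a: 0 for a in self.arms}
        self.means = {a: 0.0 for a in self.arms}
        self.t = 0

    def select(self):
        self.t += 1
        # Play each arm once before using confidence bounds.
        for a in self.arms:
            if self.counts[a] == 0:
                return a
        return max(self.arms, key=lambda a: self.means[a]
                   + math.sqrt(2 * math.log(self.t) / self.counts[a]))

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.means[arm] += (reward - self.means[arm]) / self.counts[arm]

class NaiveCorraller:
    """Hypothetical illustration: treat each base algorithm as a meta-arm
    and run UCB1 over the bases. NOT the paper's method, which handles the
    nonstationarity of base learners far more carefully."""
    def __init__(self, bases):
        self.bases = bases
        self.meta = UCB1(range(len(bases)))

    def step(self, pull):
        i = self.meta.select()          # pick a base algorithm
        arm = self.bases[i].select()    # the base picks an arm
        r = pull(arm)                   # observe the reward
        self.bases[i].update(arm, r)
        self.meta.update(i, r)
        return arm, r

# Usage: two base UCB1 learners over disjoint arm sets; Bernoulli rewards.
random.seed(0)
true_means = [0.2, 0.9, 0.4, 0.5]
bases = [UCB1([0, 1]), UCB1([2, 3])]
cor = NaiveCorraller(bases)
pulls = {a: 0 for a in range(4)}
for _ in range(2000):
    arm, _ = cor.step(lambda a: 1.0 if random.random() < true_means[a] else 0.0)
    pulls[arm] += 1
```

After enough rounds, the corraller should mostly play the arm with the highest mean reward (arm 1 here), since the base containing it looks best at the meta level.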

Cite this Paper

BibTeX
@InProceedings{pmlr-v130-arora21a,
  title     = {Corralling Stochastic Bandit Algorithms},
  author    = {Arora, Raman and Vanislavov Marinov, Teodor and Mohri, Mehryar},
  booktitle = {Proceedings of The 24th International Conference on Artificial Intelligence and Statistics},
  pages     = {2116--2124},
  year      = {2021},
  editor    = {Banerjee, Arindam and Fukumizu, Kenji},
  volume    = {130},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--15 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v130/arora21a/arora21a.pdf},
  url       = {https://proceedings.mlr.press/v130/arora21a.html},
  abstract  = {We study the problem of corralling stochastic bandit algorithms, that is combining multiple bandit algorithms designed for a stochastic environment, with the goal of devising a corralling algorithm that performs almost as well as the best base algorithm. We give two general algorithms for this setting, which we show benefit from favorable regret guarantees. We show that the regret of the corralling algorithms is no worse than that of the best algorithm containing the arm with the highest reward, and depends on the gap between the highest reward and other rewards.}
}
Endnote
%0 Conference Paper
%T Corralling Stochastic Bandit Algorithms
%A Raman Arora
%A Teodor Vanislavov Marinov
%A Mehryar Mohri
%B Proceedings of The 24th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2021
%E Arindam Banerjee
%E Kenji Fukumizu
%F pmlr-v130-arora21a
%I PMLR
%P 2116--2124
%U https://proceedings.mlr.press/v130/arora21a.html
%V 130
%X We study the problem of corralling stochastic bandit algorithms, that is combining multiple bandit algorithms designed for a stochastic environment, with the goal of devising a corralling algorithm that performs almost as well as the best base algorithm. We give two general algorithms for this setting, which we show benefit from favorable regret guarantees. We show that the regret of the corralling algorithms is no worse than that of the best algorithm containing the arm with the highest reward, and depends on the gap between the highest reward and other rewards.
APA
Arora, R., Vanislavov Marinov, T. & Mohri, M. (2021). Corralling Stochastic Bandit Algorithms. Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 130:2116-2124. Available from https://proceedings.mlr.press/v130/arora21a.html.