Thompson Sampling for Adversarial Bit Prediction

Yuval Lewi, Haim Kaplan, Yishay Mansour
Proceedings of the 31st International Conference on Algorithmic Learning Theory, PMLR 117:518-553, 2020.

Abstract

We study the Thompson sampling algorithm in an adversarial setting, specifically, for adversarial bit prediction. We characterize the bit sequences with the smallest and largest expected regret. Among sequences of length $T$ with $k < \frac{T}{2}$ zeros, the sequences of largest regret consist of alternating zeros and ones followed by the remaining ones, and the sequence of smallest regret consists of ones followed by zeros. We also bound the regret of those sequences: the worst-case sequences have regret $O(\sqrt{T})$ and the best-case sequence has regret $O(1)$. We extend our results to a model where false positive and false negative errors have different weights. We characterize the sequences with largest expected regret in this generalized setting, and derive their regret bounds. We also show that there are sequences with $O(1)$ regret.
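The setting above can be illustrated with a minimal sketch. This assumes one common instantiation of Thompson sampling for bit prediction — a Beta(1, 1) prior over the probability that the next bit is one, with the prediction made by thresholding a posterior sample at 1/2 — and measures regret against the best fixed bit in hindsight; the paper's exact protocol may differ in details.

```python
import random

def thompson_bit_prediction(bits, seed=0):
    """Run a Thompson-sampling bit predictor on a fixed bit sequence.

    Maintains a Beta(a, b) posterior over the probability that the next
    bit is 1; at each step samples theta from the posterior and predicts
    1 iff theta >= 1/2.  Returns (mistakes, regret), where regret is the
    number of mistakes minus that of the best constant predictor.
    """
    rng = random.Random(seed)
    a, b = 1, 1          # Beta(1, 1) prior: pseudo-counts of ones / zeros
    mistakes = 0
    for bit in bits:
        theta = rng.betavariate(a, b)        # posterior sample
        prediction = 1 if theta >= 0.5 else 0
        if prediction != bit:
            mistakes += 1
        if bit == 1:                          # Bayesian update on the observed bit
            a += 1
        else:
            b += 1
    ones = sum(bits)
    best_fixed = min(ones, len(bits) - ones)  # mistakes of the best constant bit
    return mistakes, mistakes - best_fixed

# The two sequence families from the abstract (illustrative parameters):
T, k = 100, 20
worst = [0, 1] * k + [1] * (T - 2 * k)   # alternating zeros/ones, then remaining ones
best = [1] * (T - k) + [0] * k           # ones followed by zeros
```

On a single run the realized regret is random (and can even be negative); the paper's bounds concern the expected regret of these sequence families.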

Cite this Paper


BibTeX
@InProceedings{pmlr-v117-lewi20a,
  title = {Thompson Sampling for Adversarial Bit Prediction},
  author = {Lewi, Yuval and Kaplan, Haim and Mansour, Yishay},
  booktitle = {Proceedings of the 31st International Conference on Algorithmic Learning Theory},
  pages = {518--553},
  year = {2020},
  editor = {Kontorovich, Aryeh and Neu, Gergely},
  volume = {117},
  series = {Proceedings of Machine Learning Research},
  month = {08 Feb--11 Feb},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v117/lewi20a/lewi20a.pdf},
  url = {https://proceedings.mlr.press/v117/lewi20a.html},
  abstract = {We study the Thompson sampling algorithm in an adversarial setting, specifically, for adversarial bit prediction. We characterize the bit sequences with the smallest and largest expected regret. Among sequences of length $T$ with $k < \frac{T}{2}$ zeros, the sequences of largest regret consist of alternating zeros and ones followed by the remaining ones, and the sequence of smallest regret consists of ones followed by zeros. We also bound the regret of those sequences: the worst-case sequences have regret $O(\sqrt{T})$ and the best-case sequence has regret $O(1)$. We extend our results to a model where false positive and false negative errors have different weights. We characterize the sequences with largest expected regret in this generalized setting, and derive their regret bounds. We also show that there are sequences with $O(1)$ regret.}
}
Endnote
%0 Conference Paper
%T Thompson Sampling for Adversarial Bit Prediction
%A Yuval Lewi
%A Haim Kaplan
%A Yishay Mansour
%B Proceedings of the 31st International Conference on Algorithmic Learning Theory
%C Proceedings of Machine Learning Research
%D 2020
%E Aryeh Kontorovich
%E Gergely Neu
%F pmlr-v117-lewi20a
%I PMLR
%P 518--553
%U https://proceedings.mlr.press/v117/lewi20a.html
%V 117
%X We study the Thompson sampling algorithm in an adversarial setting, specifically, for adversarial bit prediction. We characterize the bit sequences with the smallest and largest expected regret. Among sequences of length $T$ with $k < \frac{T}{2}$ zeros, the sequences of largest regret consist of alternating zeros and ones followed by the remaining ones, and the sequence of smallest regret consists of ones followed by zeros. We also bound the regret of those sequences: the worst-case sequences have regret $O(\sqrt{T})$ and the best-case sequence has regret $O(1)$. We extend our results to a model where false positive and false negative errors have different weights. We characterize the sequences with largest expected regret in this generalized setting, and derive their regret bounds. We also show that there are sequences with $O(1)$ regret.
APA
Lewi, Y., Kaplan, H. &amp; Mansour, Y. (2020). Thompson Sampling for Adversarial Bit Prediction. Proceedings of the 31st International Conference on Algorithmic Learning Theory, in Proceedings of Machine Learning Research 117:518-553. Available from https://proceedings.mlr.press/v117/lewi20a.html.