On the Performance of Thompson Sampling on Logistic Bandits

Shi Dong, Tengyu Ma, Benjamin Van Roy
Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:1158-1160, 2019.

Abstract

We study the logistic bandit, in which rewards are binary with success probability $\exp(\beta a^\top \theta) / (1 + \exp(\beta a^\top \theta))$ and actions $a$ and coefficients $\theta$ are within the $d$-dimensional unit ball. While prior regret bounds for algorithms that address the logistic bandit exhibit exponential dependence on the slope parameter $\beta$, we establish a regret bound for Thompson sampling that is independent of $\beta$. Specifically, we establish that, when the set of feasible actions is identical to the set of possible coefficient vectors, the Bayesian regret of Thompson sampling is $\tilde{O}(d\sqrt{T})$. We also establish a $\tilde{O}(\sqrt{d\eta T}/\Delta)$ bound that applies more broadly, where $\Delta$ is the worst-case optimal log-odds and $\eta$ is the “fragility dimension,” a new statistic we define to capture the degree to which an optimal action for one model fails to satisfice for others. We demonstrate that the fragility dimension plays an essential role by showing that, for any $\epsilon > 0$, no algorithm can achieve $\mathrm{poly}(d, 1/\Delta)\cdot T^{1-\epsilon}$ regret.
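To make the setting concrete, below is a minimal sketch of Thompson sampling in the "actions identical to coefficients" regime covered by the first bound. It is not the paper's algorithm statement: to keep the posterior update exact and the script self-contained, it assumes a uniform prior over a finite set of candidate unit vectors, and all problem sizes and variable names are illustrative.

```python
import numpy as np

# Illustrative sketch (assumptions, not from the paper): uniform prior over a
# finite candidate set of unit vectors, with the action set equal to that same
# candidate set, so the exact Bayesian posterior update is a finite sum.
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

d, K, beta, T = 5, 50, 10.0, 2000  # dimension, candidates, slope, horizon

# Candidate coefficient vectors on the unit sphere; one of them is the truth.
candidates = rng.normal(size=(K, d))
candidates /= np.linalg.norm(candidates, axis=1, keepdims=True)
theta_star = candidates[rng.integers(K)]

log_post = np.zeros(K)            # log posterior, starting from a uniform prior
best_p = sigmoid(beta)            # optimal action a = theta_star has a.theta = 1
regret = 0.0

for t in range(T):
    # Thompson sampling: draw a model from the posterior, act greedily for it.
    probs = np.exp(log_post - log_post.max())
    probs /= probs.sum()
    theta_hat = candidates[rng.choice(K, p=probs)]
    # With actions = candidates (all unit vectors), the greedy action for
    # theta_hat is theta_hat itself.
    a = theta_hat
    p = sigmoid(beta * a @ theta_star)
    r = rng.random() < p          # binary reward with success probability p
    # Exact Bayesian update over the finite candidate set.
    logits = beta * candidates @ a
    log_post += np.log(sigmoid(logits)) if r else np.log(sigmoid(-logits))
    regret += best_p - p          # expected per-step regret

print(f"cumulative regret over {T} steps: {regret:.2f}")
```

In the paper itself $\theta$ ranges over the whole unit ball rather than a finite set; the finite-candidate prior here is only a device to make the posterior computable in a few lines.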

Cite this Paper


BibTeX
@InProceedings{pmlr-v99-dong19a,
  title     = {On the Performance of Thompson Sampling on Logistic Bandits},
  author    = {Dong, Shi and Ma, Tengyu and Van Roy, Benjamin},
  booktitle = {Proceedings of the Thirty-Second Conference on Learning Theory},
  pages     = {1158--1160},
  year      = {2019},
  editor    = {Beygelzimer, Alina and Hsu, Daniel},
  volume    = {99},
  series    = {Proceedings of Machine Learning Research},
  month     = {25--28 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v99/dong19a/dong19a.pdf},
  url       = {https://proceedings.mlr.press/v99/dong19a.html}
}
Endnote
%0 Conference Paper
%T On the Performance of Thompson Sampling on Logistic Bandits
%A Shi Dong
%A Tengyu Ma
%A Benjamin Van Roy
%B Proceedings of the Thirty-Second Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2019
%E Alina Beygelzimer
%E Daniel Hsu
%F pmlr-v99-dong19a
%I PMLR
%P 1158--1160
%U https://proceedings.mlr.press/v99/dong19a.html
%V 99
APA
Dong, S., Ma, T. & Van Roy, B.. (2019). On the Performance of Thompson Sampling on Logistic Bandits. Proceedings of the Thirty-Second Conference on Learning Theory, in Proceedings of Machine Learning Research 99:1158-1160 Available from https://proceedings.mlr.press/v99/dong19a.html.
