Contextual Bandits with Stochastic Experts

Rajat Sen, Karthikeyan Shanmugam, Sanjay Shakkottai
Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics, PMLR 84:852-861, 2018.

Abstract

We consider the problem of contextual bandits with stochastic experts, which is a variation of the traditional stochastic contextual bandit with experts problem. In our problem setting, we assume access to a class of stochastic experts, where each expert is a conditional distribution over the arms given a context. We propose upper-confidence bound (UCB) algorithms for this problem, which employ two different importance-sampling-based estimators for the mean reward of each expert. Both these estimators leverage information leakage among the experts, thus using samples collected under all the experts to estimate the mean reward of any given expert. This leads to instance-dependent regret bounds of $\mathcal{O}\left(\lambda(\pmb{\mu})\mathcal{M}\log T/\Delta\right)$, where $\lambda(\pmb{\mu})$ is a term that depends on the mean rewards of the experts, $\Delta$ is the smallest gap between the mean reward of the optimal expert and the rest, and $\mathcal{M}$ quantifies the information leakage among the experts. We show that under some assumptions $\lambda(\pmb{\mu})$ is typically $\mathcal{O}(\log N)$. We implement our algorithm with stochastic experts generated from cost-sensitive classification oracles and show superior empirical performance on real-world datasets, when compared to other state-of-the-art contextual bandit algorithms.
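The core mechanism the abstract describes (an UCB rule over stochastic experts, where every observed sample updates every expert's reward estimate via importance weighting) can be illustrated with a minimal sketch. This is not the authors' implementation: it drops the context for brevity (a single context, so each expert is just a fixed distribution over arms), uses a self-normalised importance-sampling estimator, and a heuristic UCB bonus; all constants and distributions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 3000
# Three hypothetical "stochastic experts", each a distribution over two arms.
experts = np.array([[0.9, 0.1],   # expert 0: prefers arm 0
                    [0.5, 0.5],   # expert 1: uniform
                    [0.1, 0.9]])  # expert 2: prefers arm 1
arm_means = np.array([0.8, 0.2])  # Bernoulli reward mean of each arm
mu = experts @ arm_means          # true mean reward of each expert

n_experts = len(experts)
wr = np.zeros(n_experts)          # importance-weighted reward sums
ws = np.full(n_experts, 1e-9)     # importance-weight sums (normalisers)
pulls = np.zeros(n_experts, dtype=int)

for t in range(1, T + 1):
    est = wr / ws                              # self-normalised IS estimates
    bonus = np.sqrt(2.0 * np.log(t) / ws)      # heuristic UCB exploration bonus
    e = int(np.argmax(est + bonus))            # play the optimistic expert
    a = rng.choice(2, p=experts[e])            # expert e samples an arm
    r = float(rng.random() < arm_means[a])     # Bernoulli reward
    pulls[e] += 1
    # "Information leakage": this single sample updates EVERY expert's
    # estimate, weighted by the likelihood ratio of choosing arm a.
    w = experts[:, a] / experts[e, a]
    wr += w * r
    ws += w

est = wr / ws
print(pulls, np.round(est, 3), np.round(mu, 3))
```

With the gap above, the optimistic rule concentrates its pulls on expert 0 while the shared estimator still tracks the mean rewards of the experts it rarely plays, which is the sample-efficiency benefit the abstract attributes to information leakage.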

Cite this Paper


BibTeX
@InProceedings{pmlr-v84-sen18a,
  title     = {Contextual Bandits with Stochastic Experts},
  author    = {Sen, Rajat and Shanmugam, Karthikeyan and Shakkottai, Sanjay},
  booktitle = {Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics},
  pages     = {852--861},
  year      = {2018},
  editor    = {Storkey, Amos and Perez-Cruz, Fernando},
  volume    = {84},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--11 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v84/sen18a/sen18a.pdf},
  url       = {https://proceedings.mlr.press/v84/sen18a.html},
  abstract  = {We consider the problem of contextual bandits with stochastic experts, which is a variation of the traditional stochastic contextual bandit with experts problem. In our problem setting, we assume access to a class of stochastic experts, where each expert is a conditional distribution over the arms given a context. We propose upper-confidence bound (UCB) algorithms for this problem, which employ two different importance sampling based estimators for the mean reward for each expert. Both these estimators leverage information leakage among the experts, thus using samples collected under all the experts to estimate the mean reward of any given expert. This leads to instance dependent regret bounds of $\mathcal{O}\left(\lambda(\pmb{\mu})\mathcal{M}\log T/\Delta\right)$, where $\lambda(\pmb{\mu})$ is a term that depends on the mean rewards of the experts, $\Delta$ is the smallest gap between the mean reward of the optimal expert and the rest, and $\mathcal{M}$ quantifies the information leakage among the experts. We show that under some assumptions $\lambda(\pmb{\mu})$ is typically $\mathcal{O}(\log N)$. We implement our algorithm with stochastic experts generated from cost-sensitive classification oracles and show superior empirical performance on real-world datasets, when compared to other state of the art contextual bandit algorithms.}
}
Endnote
%0 Conference Paper
%T Contextual Bandits with Stochastic Experts
%A Rajat Sen
%A Karthikeyan Shanmugam
%A Sanjay Shakkottai
%B Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2018
%E Amos Storkey
%E Fernando Perez-Cruz
%F pmlr-v84-sen18a
%I PMLR
%P 852--861
%U https://proceedings.mlr.press/v84/sen18a.html
%V 84
%X We consider the problem of contextual bandits with stochastic experts, which is a variation of the traditional stochastic contextual bandit with experts problem. In our problem setting, we assume access to a class of stochastic experts, where each expert is a conditional distribution over the arms given a context. We propose upper-confidence bound (UCB) algorithms for this problem, which employ two different importance sampling based estimators for the mean reward for each expert. Both these estimators leverage information leakage among the experts, thus using samples collected under all the experts to estimate the mean reward of any given expert. This leads to instance dependent regret bounds of $\mathcal{O}\left(\lambda(\pmb{\mu})\mathcal{M}\log T/\Delta\right)$, where $\lambda(\pmb{\mu})$ is a term that depends on the mean rewards of the experts, $\Delta$ is the smallest gap between the mean reward of the optimal expert and the rest, and $\mathcal{M}$ quantifies the information leakage among the experts. We show that under some assumptions $\lambda(\pmb{\mu})$ is typically $\mathcal{O}(\log N)$. We implement our algorithm with stochastic experts generated from cost-sensitive classification oracles and show superior empirical performance on real-world datasets, when compared to other state of the art contextual bandit algorithms.
APA
Sen, R., Shanmugam, K. & Shakkottai, S. (2018). Contextual Bandits with Stochastic Experts. Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 84:852-861. Available from https://proceedings.mlr.press/v84/sen18a.html.