Contextual Bandits with Latent Confounders: An NMF Approach

Rajat Sen, Karthikeyan Shanmugam, Murat Kocaoglu, Alex Dimakis, Sanjay Shakkottai
Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, PMLR 54:518-527, 2017.

Abstract

Motivated by online recommendation and advertising systems, we consider a causal model for stochastic contextual bandits with a latent low-dimensional confounder. In our model, there are $L$ observed contexts and $K$ arms of the bandit. The observed context influences the reward obtained through a latent confounder variable with cardinality $m$ ($m \ll L, K$). The arm choice and the latent confounder causally determine the reward, while the observed context is correlated with the confounder. Under this model, the $L \times K$ mean reward matrix $\mathbf{U}$ (for each context in $[L]$ and each arm in $[K]$) factorizes into non-negative factors $\mathbf{A}$ ($L \times m$) and $\mathbf{W}$ ($m \times K$). This insight enables us to propose an $\epsilon$-greedy NMF-Bandit algorithm that designs a sequence of interventions (selecting specific arms) to balance learning this low-dimensional structure against selecting the best arm to minimize regret. Our algorithm achieves a regret of $\mathcal{O}\left(L\,\mathrm{poly}(m, \log K) \log T\right)$ at time $T$, as compared to $\mathcal{O}(LK \log T)$ for conventional contextual bandits, assuming a constant gap between the best arm and the rest for each context. These guarantees are obtained under mild sufficiency conditions on the factors that are weaker versions of the well-known Statistical RIP condition. We further propose a class of generative models that satisfy our sufficient conditions, and derive a lower bound of $\mathcal{O}\left(Km \log T\right)$. These are the first regret guarantees for online matrix completion with bandit feedback when the rank is greater than one. We further compare the performance of our algorithm against the state of the art on synthetic and real-world datasets.
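
The factorization above suggests a simple way to exploit the low-rank structure online. The following is a minimal, illustrative sketch only, not the paper's actual NMF-Bandit algorithm or experimental setup: an $\epsilon$-greedy loop that explores arms uniformly at random with a decaying rate, periodically fits a rank-$m$ NMF to the empirical mean-reward matrix, and otherwise plays the arm maximizing the low-rank reconstruction for the observed context. The problem sizes, the exploration schedule, the refit period, and the use of scikit-learn's NMF are all assumptions made for illustration.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

# Hypothetical problem sizes (illustrative, not from the paper's experiments).
L, K, m, T = 20, 50, 3, 20000

# Ground-truth non-negative factors: U = A @ W gives mean rewards in [0, 1].
A = rng.dirichlet(np.ones(m), size=L)    # L x m mixture weights (rows sum to 1)
W = rng.uniform(size=(m, K))             # m x K mean reward per latent state/arm
U = A @ W                                # L x K mean reward matrix

counts = np.zeros((L, K))                # pulls per (context, arm)
sums = np.zeros((L, K))                  # reward totals per (context, arm)
Uhat = np.zeros((L, K))                  # current low-rank reward estimate
regret = 0.0

for t in range(1, T + 1):
    x = rng.integers(L)                  # observed context for this round
    eps = min(1.0, 10.0 / t)             # decaying exploration probability
    if rng.random() < eps or not Uhat.any():
        a = int(rng.integers(K))         # explore: uniform random arm
    else:
        a = int(np.argmax(Uhat[x]))      # exploit: best arm under rank-m estimate
    reward = float(rng.random() < U[x, a])   # Bernoulli reward
    counts[x, a] += 1
    sums[x, a] += reward
    regret += U[x].max() - U[x, a]

    if t % 500 == 0:                     # periodically refit the rank-m NMF
        means = np.divide(sums, counts,
                          out=np.zeros_like(sums), where=counts > 0)
        nmf = NMF(n_components=m, init='random', random_state=0, max_iter=300)
        Ahat = nmf.fit_transform(means)  # L x m estimated factor
        Uhat = Ahat @ nmf.components_    # reconstructed L x K reward estimate

print(f"cumulative regret after {T} rounds: {regret:.1f}")
```

In this sketch, refitting the NMF from scratch every 500 rounds stands in for the paper's more careful design of exploration interventions, and the toy generative model above is not guaranteed to satisfy the sufficiency conditions under which the paper's regret bound holds.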

Cite this Paper


BibTeX
@InProceedings{pmlr-v54-sen17a,
  title     = {{Contextual Bandits with Latent Confounders: An NMF Approach}},
  author    = {Sen, Rajat and Shanmugam, Karthikeyan and Kocaoglu, Murat and Dimakis, Alex and Shakkottai, Sanjay},
  booktitle = {Proceedings of the 20th International Conference on Artificial Intelligence and Statistics},
  pages     = {518--527},
  year      = {2017},
  editor    = {Singh, Aarti and Zhu, Jerry},
  volume    = {54},
  series    = {Proceedings of Machine Learning Research},
  month     = {20--22 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v54/sen17a/sen17a.pdf},
  url       = {https://proceedings.mlr.press/v54/sen17a.html}
}
Endnote
%0 Conference Paper
%T Contextual Bandits with Latent Confounders: An NMF Approach
%A Rajat Sen
%A Karthikeyan Shanmugam
%A Murat Kocaoglu
%A Alex Dimakis
%A Sanjay Shakkottai
%B Proceedings of the 20th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2017
%E Aarti Singh
%E Jerry Zhu
%F pmlr-v54-sen17a
%I PMLR
%P 518--527
%U https://proceedings.mlr.press/v54/sen17a.html
%V 54
APA
Sen, R., Shanmugam, K., Kocaoglu, M., Dimakis, A. & Shakkottai, S. (2017). Contextual Bandits with Latent Confounders: An NMF Approach. Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 54:518-527. Available from https://proceedings.mlr.press/v54/sen17a.html.