$\mathtt{VITS}$ : Variational Inference Thompson Sampling for contextual bandits

Pierre Clavier, Tom Huix, Alain Oliviero Durmus
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:9033-9075, 2024.

Abstract

In this paper, we introduce and analyze a variant of the Thompson sampling (TS) algorithm for contextual bandits. At each round, traditional TS requires samples from the current posterior distribution, which is usually intractable. To circumvent this issue, approximate inference techniques can be used to provide samples whose distribution is close to the posterior. However, current approximate techniques either yield poor estimates (Laplace approximation) or are computationally expensive (MCMC methods, Ensemble sampling, ...). In this paper, we propose a new algorithm, Variational Inference TS ($\mathtt{VITS}$), based on Gaussian variational inference. This scheme provides powerful posterior approximations which are easy to sample from, and is computationally efficient, making it an ideal choice for TS. In addition, we show that $\mathtt{VITS}$ achieves a sub-linear regret bound of the same order in the dimension and number of rounds as traditional TS for linear contextual bandits. Finally, we demonstrate experimentally the effectiveness of $\mathtt{VITS}$ on both synthetic and real-world datasets.
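To make the setting concrete, below is a minimal sketch of the Thompson sampling loop that an approximate posterior plugs into, for a toy linear contextual bandit. In the linear-Gaussian case the posterior is itself Gaussian and can be maintained in closed form, which is what this sketch does; the paper's $\mathtt{VITS}$ instead fits the Gaussian (mean and covariance) by gradient steps on the ELBO, which extends to non-conjugate models. The problem setup and all names here (theta_star, noise_std, reg, ...) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

d, n_arms, T = 5, 10, 2000
theta_star = rng.normal(size=d) / np.sqrt(d)   # unknown true parameter (toy setup)
noise_std, reg = 0.1, 1.0                      # reward noise and prior precision

# Gaussian posterior over theta: N(mu, Sigma), kept via its precision matrix.
# For this linear-Gaussian model the best Gaussian approximation is the exact
# conjugate posterior, so closed-form updates stand in for VI here.
mu = np.zeros(d)
precision = reg * np.eye(d)                    # Sigma^{-1}, starts at the prior
b = np.zeros(d)

regret = 0.0
for t in range(T):
    contexts = rng.normal(size=(n_arms, d))            # fresh contexts each round
    Sigma = np.linalg.inv(precision)
    theta = rng.multivariate_normal(mu, Sigma)         # Thompson sample
    a = int(np.argmax(contexts @ theta))               # act greedily w.r.t. the sample
    x = contexts[a]
    r = x @ theta_star + noise_std * rng.normal()      # observe noisy reward
    # Posterior update from ridge-regression sufficient statistics.
    precision += np.outer(x, x) / noise_std**2
    b += r * x / noise_std**2
    mu = np.linalg.solve(precision, b)
    regret += np.max(contexts @ theta_star) - x @ theta_star

print(f"cumulative regret after {T} rounds: {regret:.2f}")
```

The paper's point is precisely that when the posterior is not Gaussian (e.g., logistic rewards), the sampling step above needs an approximation, and a variational Gaussian is cheap to sample from while remaining a good fit.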

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-clavier24a,
  title     = {$\mathtt{VITS}$ : Variational Inference Thompson Sampling for contextual bandits},
  author    = {Clavier, Pierre and Huix, Tom and Oliviero Durmus, Alain},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {9033--9075},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/clavier24a/clavier24a.pdf},
  url       = {https://proceedings.mlr.press/v235/clavier24a.html},
  abstract  = {In this paper, we introduce and analyze a variant of the Thompson sampling (TS) algorithm for contextual bandits. At each round, traditional TS requires samples from the current posterior distribution, which is usually intractable. To circumvent this issue, approximate inference techniques can be used to provide samples whose distribution is close to the posterior. However, current approximate techniques either yield poor estimates (Laplace approximation) or are computationally expensive (MCMC methods, Ensemble sampling, ...). In this paper, we propose a new algorithm, Variational Inference TS ($\mathtt{VITS}$), based on Gaussian variational inference. This scheme provides powerful posterior approximations which are easy to sample from, and is computationally efficient, making it an ideal choice for TS. In addition, we show that $\mathtt{VITS}$ achieves a sub-linear regret bound of the same order in the dimension and number of rounds as traditional TS for linear contextual bandits. Finally, we demonstrate experimentally the effectiveness of $\mathtt{VITS}$ on both synthetic and real-world datasets.}
}
Endnote
%0 Conference Paper
%T $\mathtt{VITS}$ : Variational Inference Thompson Sampling for contextual bandits
%A Pierre Clavier
%A Tom Huix
%A Alain Oliviero Durmus
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-clavier24a
%I PMLR
%P 9033--9075
%U https://proceedings.mlr.press/v235/clavier24a.html
%V 235
%X In this paper, we introduce and analyze a variant of the Thompson sampling (TS) algorithm for contextual bandits. At each round, traditional TS requires samples from the current posterior distribution, which is usually intractable. To circumvent this issue, approximate inference techniques can be used to provide samples whose distribution is close to the posterior. However, current approximate techniques either yield poor estimates (Laplace approximation) or are computationally expensive (MCMC methods, Ensemble sampling, ...). In this paper, we propose a new algorithm, Variational Inference TS ($\mathtt{VITS}$), based on Gaussian variational inference. This scheme provides powerful posterior approximations which are easy to sample from, and is computationally efficient, making it an ideal choice for TS. In addition, we show that $\mathtt{VITS}$ achieves a sub-linear regret bound of the same order in the dimension and number of rounds as traditional TS for linear contextual bandits. Finally, we demonstrate experimentally the effectiveness of $\mathtt{VITS}$ on both synthetic and real-world datasets.
APA
Clavier, P., Huix, T. & Oliviero Durmus, A. (2024). $\mathtt{VITS}$ : Variational Inference Thompson Sampling for contextual bandits. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:9033-9075. Available from https://proceedings.mlr.press/v235/clavier24a.html.