Thompson Sampling with Diffusion Generative Prior

Yu-Guan Hsieh, Shiva Kasiviswanathan, Branislav Kveton, Patrick Blöbaum
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:13434-13468, 2023.

Abstract

In this work, we initiate the idea of using denoising diffusion models to learn priors for online decision-making problems. We specifically focus on bandit meta-learning, aiming to learn a policy that performs well across bandit tasks of the same class. To this end, we train a diffusion model that learns the underlying task distribution and combine Thompson sampling with the learned prior to deal with new tasks at test time. Our posterior sampling algorithm carefully balances the learned prior against the noisy observations that come from the learner's interaction with the environment. To capture realistic bandit scenarios, we propose a novel diffusion model training procedure that trains from incomplete and noisy data, which could be of independent interest. Finally, our extensive experiments clearly demonstrate the potential of the proposed approach.
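
To make the high-level recipe concrete, the following is a minimal sketch (not the paper's algorithm) of Thompson sampling with a prior that is only accessible through samples from a generative model. It assumes a K-armed Gaussian bandit with known noise level, uses a hypothetical sample_prior function as a stand-in for the trained diffusion model, and approximates posterior sampling with self-normalized importance weighting over prior particles; the paper's actual posterior sampling with a diffusion prior is more involved.

# Sketch: Thompson sampling with a generative prior, approximated by
# importance-weighted prior particles (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
K = 5              # number of arms
SIGMA = 0.5        # known reward noise standard deviation
N_PARTICLES = 2000 # prior samples used to approximate the posterior
HORIZON = 200      # number of interaction rounds

def sample_prior(n):
    # Stand-in for the learned diffusion prior: returns n candidate
    # mean-reward vectors. In the paper this role is played by the
    # trained denoising diffusion model.
    return rng.normal(0.0, 1.0, size=(n, K))

# A new task drawn from the same distribution the prior represents.
true_means = sample_prior(1)[0]

counts = np.zeros(K)               # number of pulls per arm
sums = np.zeros(K)                 # sum of observed rewards per arm
particles = sample_prior(N_PARTICLES)
cum_regret = 0.0

for t in range(HORIZON):
    # Gaussian log-likelihood of the observations under each particle
    # (up to an additive constant, which cancels after normalization).
    log_w = np.zeros(N_PARTICLES)
    for a in range(K):
        if counts[a] > 0:
            mean_a = sums[a] / counts[a]
            log_w -= counts[a] * (particles[:, a] - mean_a) ** 2 / (2 * SIGMA ** 2)
    w = np.exp(log_w - log_w.max())
    w /= w.sum()

    # Thompson sampling: draw one particle from the approximate posterior
    # and act greedily with respect to it.
    theta = particles[rng.choice(N_PARTICLES, p=w)]
    arm = int(np.argmax(theta))
    reward = rng.normal(true_means[arm], SIGMA)

    counts[arm] += 1
    sums[arm] += reward
    cum_regret += true_means.max() - true_means[arm]

print(f"cumulative regret over {HORIZON} rounds: {cum_regret:.2f}")

The particle approximation is only meant to convey the interplay the abstract describes: the prior (here, the particles) dominates early on, and the likelihood of the accumulating observations progressively reweights it toward the true task.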

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-hsieh23a,
  title     = {Thompson Sampling with Diffusion Generative Prior},
  author    = {Hsieh, Yu-Guan and Kasiviswanathan, Shiva and Kveton, Branislav and Bl\"{o}baum, Patrick},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {13434--13468},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/hsieh23a/hsieh23a.pdf},
  url       = {https://proceedings.mlr.press/v202/hsieh23a.html},
  abstract  = {In this work, we initiate the idea of using denoising diffusion models to learn priors for online decision making problems. We specifically focus on bandit meta-learning, aiming to learn a policy that performs well across bandit tasks of a same class. To this end, we train a diffusion model that learns the underlying task distribution and combine Thompson sampling with the learned prior to deal with new tasks at test time. Our posterior sampling algorithm carefully balances between the learned prior and the noisy observations that come from the learner's interaction with the environment. To capture realistic bandit scenarios, we propose a novel diffusion model training procedure that trains from incomplete and noisy data, which could be of independent interest. Finally, our extensive experiments clearly demonstrate the potential of the proposed approach.}
}
Endnote
%0 Conference Paper
%T Thompson Sampling with Diffusion Generative Prior
%A Yu-Guan Hsieh
%A Shiva Kasiviswanathan
%A Branislav Kveton
%A Patrick Blöbaum
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-hsieh23a
%I PMLR
%P 13434--13468
%U https://proceedings.mlr.press/v202/hsieh23a.html
%V 202
%X In this work, we initiate the idea of using denoising diffusion models to learn priors for online decision making problems. We specifically focus on bandit meta-learning, aiming to learn a policy that performs well across bandit tasks of a same class. To this end, we train a diffusion model that learns the underlying task distribution and combine Thompson sampling with the learned prior to deal with new tasks at test time. Our posterior sampling algorithm carefully balances between the learned prior and the noisy observations that come from the learner's interaction with the environment. To capture realistic bandit scenarios, we propose a novel diffusion model training procedure that trains from incomplete and noisy data, which could be of independent interest. Finally, our extensive experiments clearly demonstrate the potential of the proposed approach.
APA
Hsieh, Y., Kasiviswanathan, S., Kveton, B. & Blöbaum, P. (2023). Thompson Sampling with Diffusion Generative Prior. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:13434-13468. Available from https://proceedings.mlr.press/v202/hsieh23a.html.
