Scalable Thompson Sampling via Optimal Transport

Ruiyi Zhang, Zheng Wen, Changyou Chen, Chen Fang, Tong Yu, Lawrence Carin
Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics, PMLR 89:87-96, 2019.

Abstract

Thompson sampling (TS) is a class of algorithms for sequential decision-making, which requires maintaining a posterior distribution over a reward model. However, calculating exact posterior distributions is intractable for all but the simplest models. Consequently, how to approximate a posterior distribution in a computationally efficient manner is a crucial problem for scalable TS with complex models, such as neural networks. In this paper, we use distribution optimization techniques to approximate the posterior distribution, solved via Wasserstein gradient flows. Based on this framework, a principled particle-optimization algorithm is developed for TS to approximate the posterior efficiently. Our approach is scalable and does not make explicit distribution assumptions on posterior approximations. Extensive experiments on both synthetic data and large-scale real data demonstrate the superior performance of the proposed methods.
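To make the posterior-sampling loop concrete, here is a minimal Thompson sampling sketch for a Bernoulli bandit with conjugate Beta posteriors. This is an illustrative toy, not the paper's particle-optimization algorithm: in the conjugate case exact posterior sampling is trivial, whereas the paper addresses the intractable case (e.g., neural network reward models). The function name and arguments are hypothetical.

```python
import random

def thompson_sampling(true_means, horizon, seed=0):
    """Toy Beta-Bernoulli Thompson sampling (illustrative sketch only;
    not the paper's Wasserstein-gradient-flow method)."""
    rng = random.Random(seed)
    k = len(true_means)
    # Beta(1, 1) prior for each arm's unknown success probability.
    alpha = [1] * k
    beta = [1] * k
    total_reward = 0
    for _ in range(horizon):
        # Draw one sample from each arm's posterior, play the argmax.
        samples = [rng.betavariate(alpha[i], beta[i]) for i in range(k)]
        arm = max(range(k), key=lambda i: samples[i])
        # Observe a Bernoulli reward from the (unknown) true mean.
        reward = 1 if rng.random() < true_means[arm] else 0
        total_reward += reward
        # Conjugate posterior update: exact here, intractable in general.
        alpha[arm] += reward
        beta[arm] += 1 - reward
    return total_reward
```

The step replaced by the paper's contribution is the posterior sample inside the loop: when the posterior has no closed form, it is approximated by an optimized set of particles rather than drawn exactly.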

Cite this Paper


BibTeX
@InProceedings{pmlr-v89-zhang19a,
  title     = {Scalable Thompson Sampling via Optimal Transport},
  author    = {Zhang, Ruiyi and Wen, Zheng and Chen, Changyou and Fang, Chen and Yu, Tong and Carin, Lawrence},
  booktitle = {Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics},
  pages     = {87--96},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Sugiyama, Masashi},
  volume    = {89},
  series    = {Proceedings of Machine Learning Research},
  month     = {16--18 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v89/zhang19a/zhang19a.pdf},
  url       = {http://proceedings.mlr.press/v89/zhang19a.html},
  abstract  = {Thompson sampling (TS) is a class of algorithms for sequential decision-making, which requires maintaining a posterior distribution over a reward model. However, calculating exact posterior distributions is intractable for all but the simplest models. Consequently, how to computationally-efficiently approximate a posterior distribution is a crucial problem for scalable TS with complex models, such as neural networks. In this paper, we use distribution optimization techniques to approximate the posterior distribution, solved via Wasserstein gradient flows. Based on the framework, a principled particle-optimization algorithm is developed for TS to approximate the posterior efficiently. Our approach is scalable and does not make explicit distribution assumptions on posterior approximations. Extensive experiments on both synthetic data and large-scale real data demonstrate the superior performance of the proposed methods.}
}
Endnote
%0 Conference Paper
%T Scalable Thompson Sampling via Optimal Transport
%A Ruiyi Zhang
%A Zheng Wen
%A Changyou Chen
%A Chen Fang
%A Tong Yu
%A Lawrence Carin
%B Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Masashi Sugiyama
%F pmlr-v89-zhang19a
%I PMLR
%P 87--96
%U http://proceedings.mlr.press/v89/zhang19a.html
%V 89
%X Thompson sampling (TS) is a class of algorithms for sequential decision-making, which requires maintaining a posterior distribution over a reward model. However, calculating exact posterior distributions is intractable for all but the simplest models. Consequently, how to computationally-efficiently approximate a posterior distribution is a crucial problem for scalable TS with complex models, such as neural networks. In this paper, we use distribution optimization techniques to approximate the posterior distribution, solved via Wasserstein gradient flows. Based on the framework, a principled particle-optimization algorithm is developed for TS to approximate the posterior efficiently. Our approach is scalable and does not make explicit distribution assumptions on posterior approximations. Extensive experiments on both synthetic data and large-scale real data demonstrate the superior performance of the proposed methods.
APA
Zhang, R., Wen, Z., Chen, C., Fang, C., Yu, T. & Carin, L. (2019). Scalable Thompson Sampling via Optimal Transport. Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 89:87-96. Available from http://proceedings.mlr.press/v89/zhang19a.html.