Scalable Discrete Sampling as a Multi-Armed Bandit Problem

Yutian Chen, Zoubin Ghahramani
Proceedings of The 33rd International Conference on Machine Learning, PMLR 48:2492-2501, 2016.

Abstract

Drawing a sample from a discrete distribution is one of the building blocks of Monte Carlo methods. Like other sampling algorithms, discrete sampling suffers from a high computational burden in large-scale inference problems. We study the problem of sampling a discrete random variable with a high degree of dependency that is typical in large-scale Bayesian inference and graphical models, and propose an efficient approximate solution with a subsampling approach. We make a novel connection between discrete sampling and Multi-Armed Bandit problems with a finite reward population and provide three algorithms with theoretical guarantees. Empirical evaluations show the robustness and efficiency of the approximate algorithms in both synthetic and real-world large-scale problems.
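The setting the abstract describes — each discrete outcome's unnormalized log-probability is a sum over many data terms, so exact sampling needs a full pass over the data — can be illustrated with a small sketch. This is not the paper's algorithm; it only shows the naive subsampling baseline that the bandit-based methods improve upon, and all names and sizes below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Setting: each of K discrete states has an unnormalized log-probability
# that is a sum over N per-datum terms, as in large-scale Bayesian inference.
K, N = 5, 10_000
log_terms = rng.normal(scale=0.01, size=(K, N))  # hypothetical per-datum log-likelihoods

# Exact sampling: requires a full O(K * N) pass over the data.
logits = log_terms.sum(axis=1)
probs = np.exp(logits - logits.max())  # subtract max for numerical stability
probs /= probs.sum()
exact_sample = rng.choice(K, p=probs)

# Naive subsampled approximation: estimate each state's logit from one
# random minibatch. The bandit view instead treats the K states as arms
# whose rewards are drawn from this finite population of per-datum terms,
# adaptively spending more evaluations on competitive states.
idx = rng.choice(N, size=500, replace=False)
est_logits = log_terms[:, idx].sum(axis=1) * (N / len(idx))
est_probs = np.exp(est_logits - est_logits.max())
est_probs /= est_probs.sum()
approx_sample = rng.choice(K, p=est_probs)
```

The fixed-minibatch estimator above spends the same budget on every state; the point of the bandit formulation is that states which are clearly improbable can be eliminated after few term evaluations.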

Cite this Paper


BibTeX
@InProceedings{pmlr-v48-chenb16,
  title     = {Scalable Discrete Sampling as a Multi-Armed Bandit Problem},
  author    = {Chen, Yutian and Ghahramani, Zoubin},
  booktitle = {Proceedings of The 33rd International Conference on Machine Learning},
  pages     = {2492--2501},
  year      = {2016},
  editor    = {Balcan, Maria Florina and Weinberger, Kilian Q.},
  volume    = {48},
  series    = {Proceedings of Machine Learning Research},
  address   = {New York, New York, USA},
  month     = {20--22 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v48/chenb16.pdf},
  url       = {https://proceedings.mlr.press/v48/chenb16.html},
  abstract  = {Drawing a sample from a discrete distribution is one of the building components for Monte Carlo methods. Like other sampling algorithms, discrete sampling suffers from the high computational burden in large-scale inference problems. We study the problem of sampling a discrete random variable with a high degree of dependency that is typical in large-scale Bayesian inference and graphical models, and propose an efficient approximate solution with a subsampling approach. We make a novel connection between the discrete sampling and Multi-Armed Bandits problems with a finite reward population and provide three algorithms with theoretical guarantees. Empirical evaluations show the robustness and efficiency of the approximate algorithms in both synthetic and real-world large-scale problems.}
}
Endnote
%0 Conference Paper
%T Scalable Discrete Sampling as a Multi-Armed Bandit Problem
%A Yutian Chen
%A Zoubin Ghahramani
%B Proceedings of The 33rd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2016
%E Maria Florina Balcan
%E Kilian Q. Weinberger
%F pmlr-v48-chenb16
%I PMLR
%P 2492--2501
%U https://proceedings.mlr.press/v48/chenb16.html
%V 48
%X Drawing a sample from a discrete distribution is one of the building components for Monte Carlo methods. Like other sampling algorithms, discrete sampling suffers from the high computational burden in large-scale inference problems. We study the problem of sampling a discrete random variable with a high degree of dependency that is typical in large-scale Bayesian inference and graphical models, and propose an efficient approximate solution with a subsampling approach. We make a novel connection between the discrete sampling and Multi-Armed Bandits problems with a finite reward population and provide three algorithms with theoretical guarantees. Empirical evaluations show the robustness and efficiency of the approximate algorithms in both synthetic and real-world large-scale problems.
RIS
TY  - CPAPER
TI  - Scalable Discrete Sampling as a Multi-Armed Bandit Problem
AU  - Yutian Chen
AU  - Zoubin Ghahramani
BT  - Proceedings of The 33rd International Conference on Machine Learning
DA  - 2016/06/11
ED  - Maria Florina Balcan
ED  - Kilian Q. Weinberger
ID  - pmlr-v48-chenb16
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 48
SP  - 2492
EP  - 2501
L1  - http://proceedings.mlr.press/v48/chenb16.pdf
UR  - https://proceedings.mlr.press/v48/chenb16.html
AB  - Drawing a sample from a discrete distribution is one of the building components for Monte Carlo methods. Like other sampling algorithms, discrete sampling suffers from the high computational burden in large-scale inference problems. We study the problem of sampling a discrete random variable with a high degree of dependency that is typical in large-scale Bayesian inference and graphical models, and propose an efficient approximate solution with a subsampling approach. We make a novel connection between the discrete sampling and Multi-Armed Bandits problems with a finite reward population and provide three algorithms with theoretical guarantees. Empirical evaluations show the robustness and efficiency of the approximate algorithms in both synthetic and real-world large-scale problems.
ER  -
APA
Chen, Y. & Ghahramani, Z. (2016). Scalable Discrete Sampling as a Multi-Armed Bandit Problem. Proceedings of The 33rd International Conference on Machine Learning, in Proceedings of Machine Learning Research 48:2492-2501. Available from https://proceedings.mlr.press/v48/chenb16.html.