Asynchronous Gibbs Sampling

Alexander Terenin, Daniel Simpson, David Draper
Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, PMLR 108:144-154, 2020.

Abstract

Gibbs sampling is a Markov Chain Monte Carlo (MCMC) method often used in Bayesian learning. MCMC methods can be difficult to deploy on parallel and distributed systems due to their inherently sequential nature. We study asynchronous Gibbs sampling, which achieves parallelism by simply ignoring sequential requirements. This method has been shown to produce good empirical results for some hierarchical models, and is popular in the topic modeling community, but was also shown to diverge for other targets. We introduce a theoretical framework for analyzing asynchronous Gibbs sampling and other extensions of MCMC that do not possess the Markov property. We prove that asynchronous Gibbs can be modified so that it converges under appropriate regularity conditions; we call this the exact asynchronous Gibbs algorithm. We study asynchronous Gibbs on a set of examples by comparing the exact and approximate algorithms, including two where it works well and one where it fails dramatically. We conclude with a set of heuristics to describe settings where the algorithm can be effectively used.
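
As an illustration of the contrast described in the abstract, the following minimal sketch (not the paper's implementation; rho, the delay, and the function names are illustrative) compares a sequential Gibbs sweep with a stale-read variant on a bivariate Gaussian. The sequential sampler conditions each coordinate on the other's current value, while the stale-read variant mimics workers that ignore sequential requirements by conditioning on a delayed copy of the state.

import numpy as np

rng = np.random.default_rng(0)
rho = 0.9                          # illustrative target correlation for N(0, [[1, rho], [rho, 1]])
cond_sd = np.sqrt(1.0 - rho ** 2)  # standard deviation of each full conditional

def sequential_gibbs(n_iter):
    # Standard systematic-scan Gibbs: each coordinate is drawn from its full
    # conditional given the *current* value of the other coordinate.
    x = np.zeros(2)
    samples = np.empty((n_iter, 2))
    for t in range(n_iter):
        x[0] = rng.normal(rho * x[1], cond_sd)
        x[1] = rng.normal(rho * x[0], cond_sd)
        samples[t] = x
    return samples

def stale_read_gibbs(n_iter, delay=3):
    # Toy stand-in for asynchronous updating: each coordinate is updated from
    # a delayed (stale) copy of the state, ignoring sequential requirements.
    x = np.zeros(2)
    history = [x.copy()]
    samples = np.empty((n_iter, 2))
    for t in range(n_iter):
        stale = history[max(0, len(history) - 1 - delay)]
        x[0] = rng.normal(rho * stale[1], cond_sd)
        x[1] = rng.normal(rho * stale[0], cond_sd)
        history.append(x.copy())
        samples[t] = x
    return samples

print(np.corrcoef(sequential_gibbs(20000).T)[0, 1])  # roughly rho = 0.9
print(np.corrcoef(stale_read_gibbs(20000).T)[0, 1])  # roughly 0 in this toy setup

In this toy setting the sequential sampler recovers the target correlation, while the stale-read updates settle on a different equilibrium with nearly uncorrelated coordinates, illustrating how the approximate asynchronous algorithm can target the wrong distribution and why the corrected, exact variant studied in the paper is of interest.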

Cite this Paper


BibTeX
@InProceedings{pmlr-v108-terenin20a,
  title     = {Asynchronous Gibbs Sampling},
  author    = {Terenin, Alexander and Simpson, Daniel and Draper, David},
  booktitle = {Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics},
  pages     = {144--154},
  year      = {2020},
  editor    = {Chiappa, Silvia and Calandra, Roberto},
  volume    = {108},
  series    = {Proceedings of Machine Learning Research},
  month     = {26--28 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v108/terenin20a/terenin20a.pdf},
  url       = {https://proceedings.mlr.press/v108/terenin20a.html},
  abstract  = {Gibbs sampling is a Markov Chain Monte Carlo (MCMC) method often used in Bayesian learning. MCMC methods can be difficult to deploy on parallel and distributed systems due to their inherently sequential nature. We study asynchronous Gibbs sampling, which achieves parallelism by simply ignoring sequential requirements. This method has been shown to produce good empirical results for some hierarchical models, and is popular in the topic modeling community, but was also shown to diverge for other targets. We introduce a theoretical framework for analyzing asynchronous Gibbs sampling and other extensions of MCMC that do not possess the Markov property. We prove that asynchronous Gibbs can be modified so that it converges under appropriate regularity conditions - we call this the exact asynchronous Gibbs algorithm. We study asynchronous Gibbs on a set of examples by comparing the exact and approximate algorithms, including two where it works well, and one where it fails dramatically. We conclude with a set of heuristics to describe settings where the algorithm can be effectively used.}
}
Endnote
%0 Conference Paper
%T Asynchronous Gibbs Sampling
%A Alexander Terenin
%A Daniel Simpson
%A David Draper
%B Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2020
%E Silvia Chiappa
%E Roberto Calandra
%F pmlr-v108-terenin20a
%I PMLR
%P 144--154
%U https://proceedings.mlr.press/v108/terenin20a.html
%V 108
%X Gibbs sampling is a Markov Chain Monte Carlo (MCMC) method often used in Bayesian learning. MCMC methods can be difficult to deploy on parallel and distributed systems due to their inherently sequential nature. We study asynchronous Gibbs sampling, which achieves parallelism by simply ignoring sequential requirements. This method has been shown to produce good empirical results for some hierarchical models, and is popular in the topic modeling community, but was also shown to diverge for other targets. We introduce a theoretical framework for analyzing asynchronous Gibbs sampling and other extensions of MCMC that do not possess the Markov property. We prove that asynchronous Gibbs can be modified so that it converges under appropriate regularity conditions - we call this the exact asynchronous Gibbs algorithm. We study asynchronous Gibbs on a set of examples by comparing the exact and approximate algorithms, including two where it works well, and one where it fails dramatically. We conclude with a set of heuristics to describe settings where the algorithm can be effectively used.
APA
Terenin, A., Simpson, D. & Draper, D. (2020). Asynchronous Gibbs Sampling. Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 108:144-154. Available from https://proceedings.mlr.press/v108/terenin20a.html.