Adaptive Rao-Blackwellisation in Gibbs Sampling for Probabilistic Graphical Models

Craig Kelly, Somdeb Sarkhel, Deepak Venugopal
Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics, PMLR 89:2907-2915, 2019.

Abstract

Rao-Blackwellisation is a technique that provably improves the performance of Gibbs sampling by summing out variables from the PGM. However, collapsing variables is computationally expensive, since it changes the PGM structure, introducing factors whose size depends on the Markov blanket of the collapsed variable. Therefore, collapsing out several variables jointly is typically intractable for arbitrary PGM structures. In this paper, we propose an adaptive approach to Rao-Blackwellisation, where we add parallel Markov chains defined over different collapsed PGM structures. The collapsed variables are chosen based on their convergence diagnostics. However, adding a new chain requires burn-in, thus wasting samples. To address this, we initialize new chains from a mean-field approximation of the distribution that improves over time, reducing the burn-in period. Our experiments on several UAI benchmarks show that our approach is more accurate than state-of-the-art inference systems such as Merlin, which implements algorithms that have previously won the UAI inference challenge.
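
The following is a minimal, hypothetical sketch (not the authors' implementation) of the collapsing step the abstract refers to: summing a variable out of a discrete factor graph merges every factor touching it into a single new factor over its Markov blanket, after which ordinary Gibbs sampling runs on the remaining variables. The toy chain model, the `collapse` and `gibbs_step` helpers, and all parameter values are illustrative assumptions.

```python
# Hypothetical sketch of Rao-Blackwellisation (collapsing) in Gibbs sampling
# on a toy chain of binary variables. Not the paper's code.
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Toy model: 4 binary variables on a chain 0-1-2-3.
# Each factor is (scope, table), with the table indexed by the scope variables.
factors = [((i, i + 1), rng.uniform(0.5, 2.0, size=(2, 2))) for i in range(3)]

def collapse(factors, v):
    """Sum variable v out of the model, returning a new factor list."""
    touching = [f for f in factors if v in f[0]]
    rest = [f for f in factors if v not in f[0]]
    # The new scope is the union of v's neighbours, i.e. its Markov blanket.
    new_scope = sorted({u for scope, _ in touching for u in scope if u != v})
    table = np.zeros([2] * len(new_scope))
    for assign in itertools.product([0, 1], repeat=len(new_scope)):
        val = dict(zip(new_scope, assign))
        total = 0.0
        for xv in (0, 1):                       # sum over the collapsed variable
            val[v] = xv
            prod = 1.0
            for scope, tab in touching:
                prod *= tab[tuple(val[u] for u in scope)]
            total += prod
        table[assign] = total
    return rest + [(tuple(new_scope), table)]

def gibbs_step(factors, variables, x):
    """One systematic-scan Gibbs sweep over the (possibly collapsed) model."""
    for v in variables:
        w = np.ones(2)
        for scope, tab in factors:
            if v in scope:
                for xv in (0, 1):               # score both values of x_v
                    x[v] = xv
                    w[xv] *= tab[tuple(x[u] for u in scope)]
        x[v] = int(rng.random() < w[1] / w.sum())
    return x

collapsed = collapse(factors, 1)                # Rao-Blackwellise variable 1
variables = [0, 2, 3]                           # remaining (uncollapsed) variables
x = {v: int(rng.integers(0, 2)) for v in variables}
for _ in range(1000):
    x = gibbs_step(collapsed, variables, x)
print("one sample from the collapsed chain:", x)
```

Note how the merged factor's size grows with the Markov blanket of the collapsed variable; this is exactly why jointly collapsing many variables is intractable in general, motivating the adaptive, per-chain selection of collapsed variables described in the abstract.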

Cite this Paper


BibTeX
@InProceedings{pmlr-v89-kelly19a,
  title     = {Adaptive Rao-Blackwellisation in Gibbs Sampling for Probabilistic Graphical Models},
  author    = {Kelly, Craig and Sarkhel, Somdeb and Venugopal, Deepak},
  booktitle = {Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics},
  pages     = {2907--2915},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Sugiyama, Masashi},
  volume    = {89},
  series    = {Proceedings of Machine Learning Research},
  month     = {16--18 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v89/kelly19a/kelly19a.pdf},
  url       = {https://proceedings.mlr.press/v89/kelly19a.html},
  abstract  = {Rao-Blackwellisation is a technique that provably improves the performance of Gibbs sampling by summing-out variables from the PGM. However, collapsing variables is computationally expensive, since it changes the PGM structure introducing factors whose size is dependent upon the Markov blanket of the variable. Therefore, collapsing out several variables jointly is typically intractable in arbitrary PGM structures. In this paper, we propose an adaptive approach for Rao-Blackwellisation, where we add parallel Markov chains defined over different collapsed PGM structures. The collapsed variables are chosen based on their convergence diagnostics. However, adding a new chain requires burn-in, thus wasting samples. To address this, we initialize the new chains from a mean field approximation for the distribution, that improves over time, thus reducing the burn-in period. Our experiments on several UAI benchmarks shows that our approach is more accurate than state-of-the-art inference systems such as Merlin that implements algorithms that have previously won the UAI inference challenge.}
}
Endnote
%0 Conference Paper
%T Adaptive Rao-Blackwellisation in Gibbs Sampling for Probabilistic Graphical Models
%A Craig Kelly
%A Somdeb Sarkhel
%A Deepak Venugopal
%B Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Masashi Sugiyama
%F pmlr-v89-kelly19a
%I PMLR
%P 2907--2915
%U https://proceedings.mlr.press/v89/kelly19a.html
%V 89
%X Rao-Blackwellisation is a technique that provably improves the performance of Gibbs sampling by summing-out variables from the PGM. However, collapsing variables is computationally expensive, since it changes the PGM structure introducing factors whose size is dependent upon the Markov blanket of the variable. Therefore, collapsing out several variables jointly is typically intractable in arbitrary PGM structures. In this paper, we propose an adaptive approach for Rao-Blackwellisation, where we add parallel Markov chains defined over different collapsed PGM structures. The collapsed variables are chosen based on their convergence diagnostics. However, adding a new chain requires burn-in, thus wasting samples. To address this, we initialize the new chains from a mean field approximation for the distribution, that improves over time, thus reducing the burn-in period. Our experiments on several UAI benchmarks shows that our approach is more accurate than state-of-the-art inference systems such as Merlin that implements algorithms that have previously won the UAI inference challenge.
APA
Kelly, C., Sarkhel, S. & Venugopal, D. (2019). Adaptive Rao-Blackwellisation in Gibbs Sampling for Probabilistic Graphical Models. Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 89:2907-2915. Available from https://proceedings.mlr.press/v89/kelly19a.html.
