Rapid Mixing Swendsen-Wang Sampler for Stochastic Partitioned Attractive Models

Sejun Park, Yunhun Jang, Andreas Galanis, Jinwoo Shin, Daniel Stefankovic, Eric Vigoda
Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, PMLR 54:440-449, 2017.

Abstract

The Gibbs sampler is the most popular Markov chain used for learning and inference problems in Graphical Models (GM). These tasks are computationally intractable in general, and the Gibbs sampler often suffers from slow mixing. In this paper, we study the Swendsen-Wang dynamics, a more sophisticated Markov chain designed to overcome bottlenecks that impede the Gibbs sampler. We prove O(log n) mixing time for attractive binary pairwise GMs (i.e., ferromagnetic Ising models) on stochastic partitioned graphs with n vertices, under some mild conditions including low temperature regions where the Gibbs sampler provably mixes exponentially slowly. Our experiments also confirm that the Swendsen-Wang sampler significantly outperforms the Gibbs sampler for learning parameters of attractive GMs.
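For context, one step of the Swendsen-Wang dynamics for a ferromagnetic Ising model alternates between a percolation phase (occupy each edge joining equal spins with probability 1 - e^{-2β}) and a recoloring phase (assign each connected cluster a fresh uniform spin). The sketch below illustrates that generic update only; it is not the paper's partitioned-graph construction, and the function name and union-find details are illustrative:

```python
import math
import random

def swendsen_wang_step(n, edges, beta, spins, rng=random):
    """One Swendsen-Wang update for a ferromagnetic Ising model.

    n     : number of vertices, labeled 0..n-1
    edges : list of (u, v) pairs
    beta  : coupling strength (beta > 0, attractive)
    spins : current configuration, spins[i] in {-1, +1} (updated in place)
    """
    # Phase 1 (percolation): occupy each edge whose endpoints agree
    # with probability p = 1 - exp(-2*beta); track clusters via union-find.
    p = 1.0 - math.exp(-2.0 * beta)
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for u, v in edges:
        if spins[u] == spins[v] and rng.random() < p:
            parent[find(u)] = find(v)

    # Phase 2 (recoloring): every cluster gets an independent uniform spin.
    cluster_spin = {}
    for i in range(n):
        root = find(i)
        if root not in cluster_spin:
            cluster_spin[root] = rng.choice((-1, 1))
        spins[i] = cluster_spin[root]
    return spins
```

Because whole clusters flip at once, the chain can cross the low-temperature bottleneck between the mostly-plus and mostly-minus states that traps single-site Gibbs updates.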

Cite this Paper

BibTeX
@InProceedings{pmlr-v54-park17b,
  title     = {{Rapid Mixing Swendsen-Wang Sampler for Stochastic Partitioned Attractive Models}},
  author    = {Sejun Park and Yunhun Jang and Andreas Galanis and Jinwoo Shin and Daniel Stefankovic and Eric Vigoda},
  booktitle = {Proceedings of the 20th International Conference on Artificial Intelligence and Statistics},
  pages     = {440--449},
  year      = {2017},
  editor    = {Aarti Singh and Jerry Zhu},
  volume    = {54},
  series    = {Proceedings of Machine Learning Research},
  address   = {Fort Lauderdale, FL, USA},
  month     = {20--22 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v54/park17b/park17b.pdf},
  url       = {http://proceedings.mlr.press/v54/park17b.html},
  abstract  = {The Gibbs sampler is the most popular Markov chain used for learning and inference problems in Graphical Models (GM). These tasks are computationally intractable in general, and the Gibbs sampler often suffers from slow mixing. In this paper, we study the Swendsen-Wang dynamics, a more sophisticated Markov chain designed to overcome bottlenecks that impede the Gibbs sampler. We prove O(log n) mixing time for attractive binary pairwise GMs (i.e., ferromagnetic Ising models) on stochastic partitioned graphs with n vertices, under some mild conditions including low temperature regions where the Gibbs sampler provably mixes exponentially slowly. Our experiments also confirm that the Swendsen-Wang sampler significantly outperforms the Gibbs sampler for learning parameters of attractive GMs.}
}
Endnote
%0 Conference Paper
%T Rapid Mixing Swendsen-Wang Sampler for Stochastic Partitioned Attractive Models
%A Sejun Park
%A Yunhun Jang
%A Andreas Galanis
%A Jinwoo Shin
%A Daniel Stefankovic
%A Eric Vigoda
%B Proceedings of the 20th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2017
%E Aarti Singh
%E Jerry Zhu
%F pmlr-v54-park17b
%I PMLR
%J Proceedings of Machine Learning Research
%P 440--449
%U http://proceedings.mlr.press
%V 54
%W PMLR
%X The Gibbs sampler is the most popular Markov chain used for learning and inference problems in Graphical Models (GM). These tasks are computationally intractable in general, and the Gibbs sampler often suffers from slow mixing. In this paper, we study the Swendsen-Wang dynamics, a more sophisticated Markov chain designed to overcome bottlenecks that impede the Gibbs sampler. We prove O(log n) mixing time for attractive binary pairwise GMs (i.e., ferromagnetic Ising models) on stochastic partitioned graphs with n vertices, under some mild conditions including low temperature regions where the Gibbs sampler provably mixes exponentially slowly. Our experiments also confirm that the Swendsen-Wang sampler significantly outperforms the Gibbs sampler for learning parameters of attractive GMs.
APA
Park, S., Jang, Y., Galanis, A., Shin, J., Stefankovic, D. & Vigoda, E. (2017). Rapid Mixing Swendsen-Wang Sampler for Stochastic Partitioned Attractive Models. Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, in PMLR 54:440-449.