Towards Maximum Likelihood: Learning Undirected Graphical Models using Persistent Sequential Monte Carlo

Hanchen Xiong, Sandor Szedmak, Justus Piater
Proceedings of the Sixth Asian Conference on Machine Learning, PMLR 39:205-220, 2015.

Abstract

Along with the emergence of algorithms such as persistent contrastive divergence (PCD), tempered transition and parallel tempering, the past decade has witnessed a revival of learning undirected graphical models (UGMs) with sampling-based approximations. In this paper, based upon the analogy between Robbins-Monro’s stochastic approximation procedure and sequential Monte Carlo (SMC), we analyze the strengths and limitations of state-of-the-art learning algorithms from an SMC point of view. Moreover, we apply this rationale further to the sampling performed at each iteration, and propose to learn UGMs using persistent sequential Monte Carlo (PSMC). The whole learning procedure is based on samples from a long, persistent sequence of distributions which are actively constructed. Compared to the above-mentioned algorithms, one critical strength of PSMC-based learning is that it explores the sampling space more effectively. In particular, it is robust when learning rates are large or model distributions are high-dimensional and thus multi-modal, situations which often cause other algorithms to deteriorate. We tested PSMC learning, together with other related methods, on carefully designed experiments with both synthetic and real-world data; our empirical results demonstrate that PSMC compares favorably with the state of the art.
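
To make the abstract's idea concrete, the sketch below (plain Python with NumPy) shows one way a PSMC-style learning loop could look for a fully visible binary Boltzmann machine. As in PCD, a persistent particle set is carried across parameter updates, but each gradient update is treated as the next distribution in an SMC sequence: particles are importance-reweighted from the old model to the new one, resampled, and rejuvenated by a Gibbs sweep. This is a minimal illustrative sketch under those assumptions, not the authors' reference implementation; all names and hyperparameters here (psmc_learn, gibbs_sweep, the learning rate, the particle count) are hypothetical.

import numpy as np

rng = np.random.default_rng(0)

def energy(W, X):
    # E(x) = -0.5 * x^T W x for each row x of X (fully visible binary
    # Boltzmann machine; biases omitted for brevity, diagonal of W is zero).
    return -0.5 * np.einsum('ij,ni,nj->n', W, X, X)

def gibbs_sweep(W, X):
    # One systematic-scan Gibbs sweep over all units of every particle.
    # W has zero diagonal, so X @ W[:, j] contains no self-coupling term.
    n, d = X.shape
    for j in range(d):
        p = 1.0 / (1.0 + np.exp(-(X @ W[:, j])))
        X[:, j] = (rng.random(n) < p).astype(X.dtype)
    return X

def psmc_learn(data, n_particles=100, n_iters=500, lr=0.05):
    # Hypothetical PSMC-style learner: the sequence of distributions is
    # the sequence of models produced by the gradient updates themselves.
    n, d = data.shape
    W = np.zeros((d, d))
    X = rng.integers(0, 2, size=(n_particles, d)).astype(float)
    data_corr = data.T @ data / n                    # positive statistics
    for _ in range(n_iters):
        model_corr = X.T @ X / n_particles           # negative statistics
        W_new = W + lr * (data_corr - model_corr)    # ascend log-likelihood
        W_new = 0.5 * (W_new + W_new.T)              # keep W symmetric
        np.fill_diagonal(W_new, 0.0)                 # no self-couplings
        # SMC reweighting: ratio of unnormalized new/old model probabilities.
        log_w = energy(W, X) - energy(W_new, X)
        w = np.exp(log_w - log_w.max())
        w /= w.sum()
        # Multinomial resampling every step for simplicity; a fuller SMC
        # implementation would carry weights and resample on low ESS.
        X = X[rng.choice(n_particles, n_particles, p=w)].copy()
        X = gibbs_sweep(W_new, X)                    # rejuvenation move
        W = W_new
    return W

# Toy usage: learn couplings of a 5-unit model from random binary data.
if __name__ == '__main__':
    data = rng.integers(0, 2, size=(200, 5)).astype(float)
    print(psmc_learn(data, n_particles=50, n_iters=100).round(2))

The key difference from plain PCD in this sketch is the reweight-and-resample step: without it, the persistent chains lag behind the moving model when the learning rate is large, which is the failure mode the abstract attributes to competing algorithms.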

Cite this Paper


BibTeX
@InProceedings{pmlr-v39-xiong14,
  title     = {Towards Maximum Likelihood: Learning Undirected Graphical Models using Persistent Sequential Monte Carlo},
  author    = {Xiong, Hanchen and Szedmak, Sandor and Piater, Justus},
  booktitle = {Proceedings of the Sixth Asian Conference on Machine Learning},
  pages     = {205--220},
  year      = {2015},
  editor    = {Phung, Dinh and Li, Hang},
  volume    = {39},
  series    = {Proceedings of Machine Learning Research},
  address   = {Nha Trang City, Vietnam},
  month     = {26--28 Nov},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v39/xiong14.pdf},
  url       = {https://proceedings.mlr.press/v39/xiong14.html},
  abstract  = {Along with the emergence of algorithms such as persistent contrastive divergence (PCD), tempered transition and parallel tempering, the past decade has witnessed a revival of learning undirected graphical models (UGMs) with sampling-based approximations. In this paper, based upon the analogy between Robbins-Monro’s stochastic approximation procedure and sequential Monte Carlo (SMC), we analyze the strengths and limitations of state-of-the-art learning algorithms from an SMC point of view. Moreover, we apply this rationale further to the sampling performed at each iteration, and propose to learn UGMs using persistent sequential Monte Carlo (PSMC). The whole learning procedure is based on samples from a long, persistent sequence of distributions which are actively constructed. Compared to the above-mentioned algorithms, one critical strength of PSMC-based learning is that it explores the sampling space more effectively. In particular, it is robust when learning rates are large or model distributions are high-dimensional and thus multi-modal, situations which often cause other algorithms to deteriorate. We tested PSMC learning, together with other related methods, on carefully designed experiments with both synthetic and real-world data; our empirical results demonstrate that PSMC compares favorably with the state of the art.}
}
Endnote
%0 Conference Paper
%T Towards Maximum Likelihood: Learning Undirected Graphical Models using Persistent Sequential Monte Carlo
%A Hanchen Xiong
%A Sandor Szedmak
%A Justus Piater
%B Proceedings of the Sixth Asian Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2015
%E Dinh Phung
%E Hang Li
%F pmlr-v39-xiong14
%I PMLR
%P 205--220
%U https://proceedings.mlr.press/v39/xiong14.html
%V 39
%X Along with the emergence of algorithms such as persistent contrastive divergence (PCD), tempered transition and parallel tempering, the past decade has witnessed a revival of learning undirected graphical models (UGMs) with sampling-based approximations. In this paper, based upon the analogy between Robbins-Monro’s stochastic approximation procedure and sequential Monte Carlo (SMC), we analyze the strengths and limitations of state-of-the-art learning algorithms from an SMC point of view. Moreover, we apply this rationale further to the sampling performed at each iteration, and propose to learn UGMs using persistent sequential Monte Carlo (PSMC). The whole learning procedure is based on samples from a long, persistent sequence of distributions which are actively constructed. Compared to the above-mentioned algorithms, one critical strength of PSMC-based learning is that it explores the sampling space more effectively. In particular, it is robust when learning rates are large or model distributions are high-dimensional and thus multi-modal, situations which often cause other algorithms to deteriorate. We tested PSMC learning, together with other related methods, on carefully designed experiments with both synthetic and real-world data; our empirical results demonstrate that PSMC compares favorably with the state of the art.
RIS
TY - CPAPER
TI - Towards Maximum Likelihood: Learning Undirected Graphical Models using Persistent Sequential Monte Carlo
AU - Hanchen Xiong
AU - Sandor Szedmak
AU - Justus Piater
BT - Proceedings of the Sixth Asian Conference on Machine Learning
DA - 2015/02/16
ED - Dinh Phung
ED - Hang Li
ID - pmlr-v39-xiong14
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 39
SP - 205
EP - 220
L1 - http://proceedings.mlr.press/v39/xiong14.pdf
UR - https://proceedings.mlr.press/v39/xiong14.html
AB - Along with the emergence of algorithms such as persistent contrastive divergence (PCD), tempered transition and parallel tempering, the past decade has witnessed a revival of learning undirected graphical models (UGMs) with sampling-based approximations. In this paper, based upon the analogy between Robbins-Monro’s stochastic approximation procedure and sequential Monte Carlo (SMC), we analyze the strengths and limitations of state-of-the-art learning algorithms from an SMC point of view. Moreover, we apply this rationale further to the sampling performed at each iteration, and propose to learn UGMs using persistent sequential Monte Carlo (PSMC). The whole learning procedure is based on samples from a long, persistent sequence of distributions which are actively constructed. Compared to the above-mentioned algorithms, one critical strength of PSMC-based learning is that it explores the sampling space more effectively. In particular, it is robust when learning rates are large or model distributions are high-dimensional and thus multi-modal, situations which often cause other algorithms to deteriorate. We tested PSMC learning, together with other related methods, on carefully designed experiments with both synthetic and real-world data; our empirical results demonstrate that PSMC compares favorably with the state of the art.
ER -
APA
Xiong, H., Szedmak, S. & Piater, J. (2015). Towards Maximum Likelihood: Learning Undirected Graphical Models using Persistent Sequential Monte Carlo. Proceedings of the Sixth Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 39:205-220. Available from https://proceedings.mlr.press/v39/xiong14.html.
