Variational Inference for Sparse and Undirected Models

John Ingraham, Debora Marks
Proceedings of the 34th International Conference on Machine Learning, PMLR 70:1607-1616, 2017.

Abstract

Undirected graphical models are applied in genomics, protein structure prediction, and neuroscience to identify sparse interactions that underlie discrete data. Although Bayesian methods for inference would be favorable in these contexts, they are rarely used because they require doubly intractable Monte Carlo sampling. Here, we develop a framework for scalable Bayesian inference of discrete undirected models based on two new methods. The first is Persistent VI, an algorithm for variational inference of discrete undirected models that avoids doubly intractable MCMC and approximations of the partition function. The second is Fadeout, a reparameterization approach for variational inference under sparsity-inducing priors that captures a posteriori correlations between parameters and hyperparameters with noncentered parameterizations. We find that, together, these methods for variational inference substantially improve learning of sparse undirected graphical models in simulated and real problems from physics and biology.
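The abstract's mention of "noncentered parameterizations" refers to a general trick from Bayesian hierarchical modeling. As a minimal, hedged sketch (not code from the paper; all names are illustrative), the idea for a scale-mixture sparsity prior is to write a weight as `w = s * z` with a standard-normal `z`, so that a factorized variational posterior over `(z, log s)` can still induce the correlated "funnel" posterior over `(w, log s)` that a factorized posterior over `(w, log s)` cannot capture:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10000

# Centered parameterization: draw a scale s, then w ~ N(0, s^2).
# The joint prior/posterior over (w, log s) has a funnel-shaped
# correlation that a mean-field variational family cannot represent.
log_s = rng.normal(0.0, 1.0, size=n)
w_centered = rng.normal(0.0, np.exp(log_s))

# Noncentered parameterization: w = s * z with z ~ N(0, 1) a priori
# independent of s. A factorized posterior over (z, log s) still
# induces a correlated joint over (w, log s) after the deterministic
# transform -- this is the generic mechanism the abstract alludes to.
z = rng.normal(0.0, 1.0, size=n)
w_noncentered = np.exp(log_s) * z

# Under the prior, both parameterizations give the same marginal on w.
```

This only illustrates the reparameterization itself; Fadeout as described in the paper applies it inside stochastic variational inference for undirected models, which this sketch does not attempt.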

Cite this Paper


BibTeX
@InProceedings{pmlr-v70-ingraham17a,
  title     = {Variational Inference for Sparse and Undirected Models},
  author    = {John Ingraham and Debora Marks},
  booktitle = {Proceedings of the 34th International Conference on Machine Learning},
  pages     = {1607--1616},
  year      = {2017},
  editor    = {Precup, Doina and Teh, Yee Whye},
  volume    = {70},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--11 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v70/ingraham17a/ingraham17a.pdf},
  url       = {https://proceedings.mlr.press/v70/ingraham17a.html},
  abstract  = {Undirected graphical models are applied in genomics, protein structure prediction, and neuroscience to identify sparse interactions that underlie discrete data. Although Bayesian methods for inference would be favorable in these contexts, they are rarely used because they require doubly intractable Monte Carlo sampling. Here, we develop a framework for scalable Bayesian inference of discrete undirected models based on two new methods. The first is Persistent VI, an algorithm for variational inference of discrete undirected models that avoids doubly intractable MCMC and approximations of the partition function. The second is Fadeout, a reparameterization approach for variational inference under sparsity-inducing priors that captures a posteriori correlations between parameters and hyperparameters with noncentered parameterizations. We find that, together, these methods for variational inference substantially improve learning of sparse undirected graphical models in simulated and real problems from physics and biology.}
}
Endnote
%0 Conference Paper
%T Variational Inference for Sparse and Undirected Models
%A John Ingraham
%A Debora Marks
%B Proceedings of the 34th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2017
%E Doina Precup
%E Yee Whye Teh
%F pmlr-v70-ingraham17a
%I PMLR
%P 1607--1616
%U https://proceedings.mlr.press/v70/ingraham17a.html
%V 70
%X Undirected graphical models are applied in genomics, protein structure prediction, and neuroscience to identify sparse interactions that underlie discrete data. Although Bayesian methods for inference would be favorable in these contexts, they are rarely used because they require doubly intractable Monte Carlo sampling. Here, we develop a framework for scalable Bayesian inference of discrete undirected models based on two new methods. The first is Persistent VI, an algorithm for variational inference of discrete undirected models that avoids doubly intractable MCMC and approximations of the partition function. The second is Fadeout, a reparameterization approach for variational inference under sparsity-inducing priors that captures a posteriori correlations between parameters and hyperparameters with noncentered parameterizations. We find that, together, these methods for variational inference substantially improve learning of sparse undirected graphical models in simulated and real problems from physics and biology.
APA
Ingraham, J. & Marks, D. (2017). Variational Inference for Sparse and Undirected Models. Proceedings of the 34th International Conference on Machine Learning, in Proceedings of Machine Learning Research 70:1607-1616. Available from https://proceedings.mlr.press/v70/ingraham17a.html.