Adaptive Sparsity in Gaussian Graphical Models

Eleanor Wong, Suyash Awate, P. Thomas Fletcher
Proceedings of the 30th International Conference on Machine Learning, PMLR 28(1):311-319, 2013.

Abstract

An effective approach to structure learning and parameter estimation for Gaussian graphical models is to impose a sparsity prior, such as a Laplace prior, on the entries of the precision matrix. Such an approach involves a hyperparameter that must be tuned to control the amount of sparsity. In this paper, we introduce a parameter-free method for estimating a precision matrix with sparsity that adapts to the data automatically. We achieve this by formulating a hierarchical Bayesian model of the precision matrix with a non-informative Jeffreys’ hyperprior. We also naturally enforce the symmetry and positive-definiteness constraints on the precision matrix by parameterizing it with the Cholesky decomposition. Experiments on simulated and real (cell signaling) data demonstrate that the proposed approach not only automatically adapts the sparsity of the model, but it also results in improved estimates of the precision matrix compared to the Laplace prior model with sparsity parameter chosen by cross-validation.
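
To make the abstract's two modeling ingredients concrete, here is a minimal sketch, not the authors' implementation: it parameterizes the precision matrix as Omega = L Lᵀ through its Cholesky factor, so symmetry and positive-definiteness hold by construction, and it applies an adaptive log-magnitude penalty to the off-diagonal precision entries, which is the penalty a Laplace prior reduces to once its scale hyperparameter is marginalized under a Jeffreys hyperprior. The `fit_sparse_precision` helper, the `eps` stability floor, and the use of L-BFGS-B are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of adaptive-sparsity precision estimation via a
# Cholesky parameterization; the authors' actual scheme may differ.
import numpy as np
from scipy.optimize import minimize

def fit_sparse_precision(X, eps=1e-4):
    n, p = X.shape
    S = np.cov(X, rowvar=False, bias=True)   # empirical covariance
    tril = np.tril_indices(p)                 # free params: lower triangle of L
    diag = np.arange(p)

    def unpack(theta):
        L = np.zeros((p, p))
        L[tril] = theta
        L[diag, diag] = np.exp(L[diag, diag])  # positive diagonal => Omega is PD
        return L

    def objective(theta):
        L = unpack(theta)
        Omega = L @ L.T
        # Negative Gaussian log-likelihood (up to constants), using
        # log det(Omega) = 2 * sum(log L_ii).
        nll = -2.0 * np.sum(np.log(np.diag(L))) + np.trace(S @ Omega)
        # Adaptive penalty: marginalizing the Laplace scale under a Jeffreys
        # hyperprior yields a prior ~ 1/|omega_ij|, i.e. a log|omega_ij|
        # penalty; eps is an assumed numerical floor.
        off = Omega[np.tril_indices(p, -1)]
        penalty = np.sum(np.log(np.abs(off) + eps))
        return 0.5 * n * nll + penalty

    theta0 = np.zeros(len(tril[0]))           # start at Omega = identity
    res = minimize(objective, theta0, method="L-BFGS-B")
    L = unpack(res.x)
    return L @ L.T

# Toy usage: recover a tridiagonal precision structure from samples.
rng = np.random.default_rng(0)
Omega_true = np.eye(5) + 0.4 * (np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1))
X = rng.multivariate_normal(np.zeros(5), np.linalg.inv(Omega_true), size=500)
print(np.round(fit_sparse_precision(X), 2))
```

Note that the log penalty is nonconvex and this smooth optimization shrinks off-diagonal entries toward zero rather than to exact zeros, so one would threshold small values to read off the graph structure.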

Cite this Paper


BibTeX
@InProceedings{pmlr-v28-wong13,
  title     = {Adaptive Sparsity in {G}aussian Graphical Models},
  author    = {Wong, Eleanor and Awate, Suyash and Fletcher, P. Thomas},
  booktitle = {Proceedings of the 30th International Conference on Machine Learning},
  pages     = {311--319},
  year      = {2013},
  editor    = {Dasgupta, Sanjoy and McAllester, David},
  volume    = {28},
  number    = {1},
  series    = {Proceedings of Machine Learning Research},
  address   = {Atlanta, Georgia, USA},
  month     = {17--19 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v28/wong13.pdf},
  url       = {https://proceedings.mlr.press/v28/wong13.html},
  abstract  = {An effective approach to structure learning and parameter estimation for Gaussian graphical models is to impose a sparsity prior, such as a Laplace prior, on the entries of the precision matrix. Such an approach involves a hyperparameter that must be tuned to control the amount of sparsity. In this paper, we introduce a parameter-free method for estimating a precision matrix with sparsity that adapts to the data automatically. We achieve this by formulating a hierarchical Bayesian model of the precision matrix with a non-informative Jeffreys’ hyperprior. We also naturally enforce the symmetry and positive-definiteness constraints on the precision matrix by parameterizing it with the Cholesky decomposition. Experiments on simulated and real (cell signaling) data demonstrate that the proposed approach not only automatically adapts the sparsity of the model, but it also results in improved estimates of the precision matrix compared to the Laplace prior model with sparsity parameter chosen by cross-validation.}
}
Endnote
%0 Conference Paper
%T Adaptive Sparsity in Gaussian Graphical Models
%A Eleanor Wong
%A Suyash Awate
%A P. Thomas Fletcher
%B Proceedings of the 30th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2013
%E Sanjoy Dasgupta
%E David McAllester
%F pmlr-v28-wong13
%I PMLR
%P 311--319
%U https://proceedings.mlr.press/v28/wong13.html
%V 28
%N 1
%X An effective approach to structure learning and parameter estimation for Gaussian graphical models is to impose a sparsity prior, such as a Laplace prior, on the entries of the precision matrix. Such an approach involves a hyperparameter that must be tuned to control the amount of sparsity. In this paper, we introduce a parameter-free method for estimating a precision matrix with sparsity that adapts to the data automatically. We achieve this by formulating a hierarchical Bayesian model of the precision matrix with a non-informative Jeffreys’ hyperprior. We also naturally enforce the symmetry and positive-definiteness constraints on the precision matrix by parameterizing it with the Cholesky decomposition. Experiments on simulated and real (cell signaling) data demonstrate that the proposed approach not only automatically adapts the sparsity of the model, but it also results in improved estimates of the precision matrix compared to the Laplace prior model with sparsity parameter chosen by cross-validation.
RIS
TY - CPAPER
TI - Adaptive Sparsity in Gaussian Graphical Models
AU - Eleanor Wong
AU - Suyash Awate
AU - P. Thomas Fletcher
BT - Proceedings of the 30th International Conference on Machine Learning
DA - 2013/02/13
ED - Sanjoy Dasgupta
ED - David McAllester
ID - pmlr-v28-wong13
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 28
IS - 1
SP - 311
EP - 319
L1 - http://proceedings.mlr.press/v28/wong13.pdf
UR - https://proceedings.mlr.press/v28/wong13.html
AB - An effective approach to structure learning and parameter estimation for Gaussian graphical models is to impose a sparsity prior, such as a Laplace prior, on the entries of the precision matrix. Such an approach involves a hyperparameter that must be tuned to control the amount of sparsity. In this paper, we introduce a parameter-free method for estimating a precision matrix with sparsity that adapts to the data automatically. We achieve this by formulating a hierarchical Bayesian model of the precision matrix with a non-informative Jeffreys’ hyperprior. We also naturally enforce the symmetry and positive-definiteness constraints on the precision matrix by parameterizing it with the Cholesky decomposition. Experiments on simulated and real (cell signaling) data demonstrate that the proposed approach not only automatically adapts the sparsity of the model, but it also results in improved estimates of the precision matrix compared to the Laplace prior model with sparsity parameter chosen by cross-validation.
ER -
APA
Wong, E., Awate, S. & Fletcher, P.T. (2013). Adaptive Sparsity in Gaussian Graphical Models. Proceedings of the 30th International Conference on Machine Learning, in Proceedings of Machine Learning Research 28(1):311-319. Available from https://proceedings.mlr.press/v28/wong13.html.
