Learning Latent Variable Gaussian Graphical Models

Zhaoshi Meng, Brian Eriksson, Al Hero
Proceedings of the 31st International Conference on Machine Learning, PMLR 32(2):1269-1277, 2014.

Abstract

Gaussian graphical models (GGMs) have been widely used in many high-dimensional applications, ranging from biological and financial data to recommender systems. Sparsity in GGMs plays a central role both statistically and computationally. Unfortunately, real-world data are often poorly fit by sparse graphical models. In this paper, we focus on a family of latent variable Gaussian graphical models (LVGGMs), in which the model is conditionally sparse given the latent variables but marginally non-sparse. In an LVGGM, the inverse covariance matrix has a low-rank-plus-sparse structure and can be learned in a regularized maximum-likelihood framework. We derive novel parameter estimation error bounds for LVGGMs under mild conditions in the high-dimensional setting. These results complement the existing theory on structure learning and open up new possibilities for using LVGGMs in statistical inference.
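To make the "low-rank plus sparse" structure concrete: write the marginal precision matrix of the observed variables as K = S − L, where S is the sparse conditional precision matrix and L is the positive-semidefinite low-rank term induced by marginalizing out the latent variables. The regularized maximum-likelihood framework the abstract refers to is commonly instantiated (in the style of Chandrasekaran et al., 2012; the notation below is ours and may differ from the paper's exact formulation) as the convex program

\[
(\hat{S}, \hat{L}) \in \arg\min_{S - L \succ 0,\; L \succeq 0}\;
\operatorname{tr}\!\big(\hat{\Sigma}_n (S - L)\big) - \log\det(S - L)
+ \lambda_n \big( \gamma \|S\|_1 + \operatorname{tr}(L) \big),
\]

where \(\hat{\Sigma}_n\) is the sample covariance and \(\|S\|_1\) is the elementwise l1 norm. The following minimal Python sketch fits this program with cvxpy on synthetic LVGGM data; all variable names and tuning constants are illustrative assumptions, not taken from the paper.

# A minimal sketch of sparse-plus-low-rank regularized MLE for an LVGGM,
# assuming the convex program displayed above (Chandrasekaran-style);
# the tuning values lam and gamma are illustrative, not from the paper.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)

# Synthetic ground truth: sparse chain-graph precision S_true minus a
# low-rank PSD term L_true coming from a few latent variables.
p, r, n = 30, 2, 2000
S_true = np.eye(p)
for i in range(p - 1):
    S_true[i, i + 1] = S_true[i + 1, i] = 0.3
U = rng.standard_normal((p, r))
L_true = U @ U.T
L_true *= 0.3 / np.linalg.norm(L_true, 2)  # keep K_true = S_true - L_true PD
K_true = S_true - L_true                   # marginal precision of observed vars

# Draw samples and form the empirical covariance.
X = rng.multivariate_normal(np.zeros(p), np.linalg.inv(K_true), size=n)
Sigma_hat = np.cov(X, rowvar=False)

# Regularized MLE: Gaussian likelihood terms plus an elementwise l1
# penalty on S and a trace (nuclear-norm) penalty on the PSD variable L.
lam, gamma = 0.05, 2.0
S = cp.Variable((p, p), symmetric=True)
L = cp.Variable((p, p), PSD=True)
objective = (cp.trace(Sigma_hat @ (S - L)) - cp.log_det(S - L)
             + lam * (gamma * cp.sum(cp.abs(S)) + cp.trace(L)))
problem = cp.Problem(cp.Minimize(objective), [S - L >> 0])
problem.solve(solver=cp.SCS)

print("estimated rank of L:", np.linalg.matrix_rank(L.value, tol=1e-3))
print("nonzeros in S (|.| > 1e-3):", int((np.abs(S.value) > 1e-3).sum()))

The trace penalty on L is what separates the two components at the estimation level: on PSD matrices the trace coincides with the nuclear norm, the standard convex surrogate for rank, mirroring how the l1 norm on S is the convex surrogate for sparsity.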

Cite this Paper


BibTeX
@InProceedings{pmlr-v32-meng14,
  title     = {Learning Latent Variable Gaussian Graphical Models},
  author    = {Meng, Zhaoshi and Eriksson, Brian and Hero, Al},
  booktitle = {Proceedings of the 31st International Conference on Machine Learning},
  pages     = {1269--1277},
  year      = {2014},
  editor    = {Xing, Eric P. and Jebara, Tony},
  volume    = {32},
  number    = {2},
  series    = {Proceedings of Machine Learning Research},
  address   = {Beijing, China},
  month     = {22--24 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v32/meng14.pdf},
  url       = {https://proceedings.mlr.press/v32/meng14.html}
}
Endnote
%0 Conference Paper
%T Learning Latent Variable Gaussian Graphical Models
%A Zhaoshi Meng
%A Brian Eriksson
%A Al Hero
%B Proceedings of the 31st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2014
%E Eric P. Xing
%E Tony Jebara
%F pmlr-v32-meng14
%I PMLR
%P 1269--1277
%U https://proceedings.mlr.press/v32/meng14.html
%V 32
%N 2
RIS
TY - CPAPER
TI - Learning Latent Variable Gaussian Graphical Models
AU - Zhaoshi Meng
AU - Brian Eriksson
AU - Al Hero
BT - Proceedings of the 31st International Conference on Machine Learning
DA - 2014/06/18
ED - Eric P. Xing
ED - Tony Jebara
ID - pmlr-v32-meng14
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 32
IS - 2
SP - 1269
EP - 1277
L1 - http://proceedings.mlr.press/v32/meng14.pdf
UR - https://proceedings.mlr.press/v32/meng14.html
ER -
APA
Meng, Z., Eriksson, B. & Hero, A. (2014). Learning Latent Variable Gaussian Graphical Models. Proceedings of the 31st International Conference on Machine Learning, in Proceedings of Machine Learning Research 32(2):1269-1277. Available from https://proceedings.mlr.press/v32/meng14.html.

Related Material

Download PDF: http://proceedings.mlr.press/v32/meng14.pdf