Projected subgradient methods for learning sparse Gaussians

John Duchi, Stephen Gould, Daphne Koller
Proceedings of the 24th Conference on Uncertainty in Artificial Intelligence, PMLR R6:153-160, 2008.

Abstract

Gaussian Markov random fields (GMRFs) are useful in a broad range of applications. In this paper we tackle the problem of learning a sparse GMRF in a high-dimensional space. Our approach uses the ℓ1-norm as a regularization on the inverse covariance matrix. We utilize a novel projected gradient method, which is faster than previous methods in practice and equal to the best performing of these in asymptotic complexity. We also extend the ℓ1-regularized objective to the problem of sparsifying entire blocks within the inverse covariance matrix. Our methods generalize fairly easily to this case, while other methods do not. We demonstrate that our extensions give better generalization performance on two real domains—biological network analysis and a 2D-shape modeling image task.
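In concrete terms, the ℓ1-penalized maximum-likelihood problem the abstract describes is min over X ≻ 0 of tr(SX) − log det X + λ‖X‖₁, where S is the empirical covariance; its Lagrange dual is the box-constrained problem max log det W subject to ‖W − S‖_∞ ≤ λ, with the sparse precision matrix recovered as X = W⁻¹. The NumPy sketch below illustrates the projected-gradient idea on this dual under simplifying assumptions: the function name, fixed step size, and iteration count are placeholders of ours, and the paper's actual algorithm uses a more careful step-size rule, so treat this as a sketch rather than the authors' implementation.

import numpy as np

def sparse_gaussian_pg(S, lam, step=0.05, n_iters=500):
    """Projected gradient ascent on the dual
        max_W  log det W   s.t.  |W_ij - S_ij| <= lam,
    returning X = inv(W) as the (near-)sparse inverse covariance.
    A sketch only: a fixed small step stands in for the line search
    that would normally keep W positive definite."""
    p = S.shape[0]
    W = S + lam * np.eye(p)                 # feasible, well-conditioned start
    lo, hi = S - lam, S + lam               # elementwise box constraints
    for _ in range(n_iters):
        G = np.linalg.inv(W)                # gradient of log det W
        W = np.clip(W + step * G, lo, hi)   # ascent step, then projection onto the box
        W = 0.5 * (W + W.T)                 # re-symmetrize against round-off
    return np.linalg.inv(W)

For data in an n-by-p array, S = np.cov(data, rowvar=False) gives the empirical covariance to pass in. Note that without a final thresholding step the returned entries are merely near zero rather than exactly zero; the box projection is what makes this dual formulation attractive, since projecting onto ‖W − S‖_∞ ≤ λ is just elementwise clipping.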

Cite this Paper

BibTeX
@InProceedings{pmlr-vR6-duchi08a,
  title     = {Projected subgradient methods for learning sparse Gaussians},
  author    = {Duchi, John and Gould, Stephen and Koller, Daphne},
  booktitle = {Proceedings of the 24th Conference on Uncertainty in Artificial Intelligence},
  pages     = {153--160},
  year      = {2008},
  editor    = {McAllester, David A. and Myllymäki, Petri},
  volume    = {R6},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--12 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/r6/main/assets/duchi08a/duchi08a.pdf},
  url       = {https://proceedings.mlr.press/r6/duchi08a.html},
  abstract  = {Gaussian Markov random fields (GMRFs) are useful in a broad range of applications. In this paper we tackle the problem of learning a sparse GMRF in a high-dimensional space. Our approach uses the ℓ1-norm as a regularization on the inverse covariance matrix. We utilize a novel projected gradient method, which is faster than previous methods in practice and equal to the best performing of these in asymptotic complexity. We also extend the ℓ1-regularized objective to the problem of sparsifying entire blocks within the inverse covariance matrix. Our methods generalize fairly easily to this case, while other methods do not. We demonstrate that our extensions give better generalization performance on two real domains—biological network analysis and a 2D-shape modeling image task.},
  note      = {Reissued by PMLR on 09 October 2024.}
}
Endnote
%0 Conference Paper
%T Projected subgradient methods for learning sparse Gaussians
%A John Duchi
%A Stephen Gould
%A Daphne Koller
%B Proceedings of the 24th Conference on Uncertainty in Artificial Intelligence
%C Proceedings of Machine Learning Research
%D 2008
%E David A. McAllester
%E Petri Myllymäki
%F pmlr-vR6-duchi08a
%I PMLR
%P 153--160
%U https://proceedings.mlr.press/r6/duchi08a.html
%V R6
%X Gaussian Markov random fields (GMRFs) are useful in a broad range of applications. In this paper we tackle the problem of learning a sparse GMRF in a high-dimensional space. Our approach uses the ℓ1-norm as a regularization on the inverse covariance matrix. We utilize a novel projected gradient method, which is faster than previous methods in practice and equal to the best performing of these in asymptotic complexity. We also extend the ℓ1-regularized objective to the problem of sparsifying entire blocks within the inverse covariance matrix. Our methods generalize fairly easily to this case, while other methods do not. We demonstrate that our extensions give better generalization performance on two real domains—biological network analysis and a 2D-shape modeling image task.
%Z Reissued by PMLR on 09 October 2024.
APA
Duchi, J., Gould, S. & Koller, D. (2008). Projected subgradient methods for learning sparse Gaussians. Proceedings of the 24th Conference on Uncertainty in Artificial Intelligence, in Proceedings of Machine Learning Research R6:153-160. Available from https://proceedings.mlr.press/r6/duchi08a.html. Reissued by PMLR on 09 October 2024.