Covariance Selection over Networks

Wenfu Xia, Fengpei Li, Ying Sun, Ziping Zhao
Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, PMLR 258:3385-3393, 2025.

Abstract

Covariance matrix estimation is a fundamental problem in multivariate data analysis, which becomes particularly challenging in high-dimensional settings due to the curse of dimensionality. To enhance estimation accuracy, structural regularization is often imposed on the precision matrix (the inverse covariance matrix) for covariance selection. In this paper, we study covariance selection in a distributed setting, where data is spread across a network of agents. We formulate the problem as a Gaussian maximum likelihood estimation problem with structural penalties and propose a novel algorithmic framework called NetGGM. Unlike existing methods that rely on a central coordinator, NetGGM operates in a fully decentralized manner with low computational complexity. We provide theoretical guarantees showing that NetGGM converges linearly to the global optimum while ensuring consensus among agents. Numerical experiments validate its convergence properties and demonstrate that it outperforms state-of-the-art methods in precision matrix estimation.
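
For context, the classical centralized formulation that covariance selection builds on is penalized Gaussian maximum likelihood estimation of the precision matrix. A standard instance is the graphical lasso objective sketched below, where $S$ is the sample covariance matrix, $\Theta$ the precision matrix, and $\lambda \geq 0$ a regularization weight; the off-diagonal $\ell_1$ penalty shown here is one common structural penalty, whereas the paper considers more general penalties and a decentralized setting. This is background only, not the paper's NetGGM algorithm.

\[
\hat{\Theta} \;=\; \operatorname*{arg\,min}_{\Theta \succ 0}\;
-\log\det\Theta \;+\; \operatorname{tr}(S\Theta) \;+\; \lambda \sum_{i \neq j} \lvert \Theta_{ij} \rvert
\]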

Cite this Paper


BibTeX
@InProceedings{pmlr-v258-xia25a,
  title     = {Covariance Selection over Networks},
  author    = {Xia, Wenfu and Li, Fengpei and Sun, Ying and Zhao, Ziping},
  booktitle = {Proceedings of The 28th International Conference on Artificial Intelligence and Statistics},
  pages     = {3385--3393},
  year      = {2025},
  editor    = {Li, Yingzhen and Mandt, Stephan and Agrawal, Shipra and Khan, Emtiyaz},
  volume    = {258},
  series    = {Proceedings of Machine Learning Research},
  month     = {03--05 May},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v258/main/assets/xia25a/xia25a.pdf},
  url       = {https://proceedings.mlr.press/v258/xia25a.html},
  abstract  = {Covariance matrix estimation is a fundamental problem in multivariate data analysis, which becomes particularly challenging in high-dimensional settings due to the curse of dimensionality. To enhance estimation accuracy, structural regularization is often imposed on the precision matrix (the inverse covariance matrix) for covariance selection. In this paper, we study covariance selection in a distributed setting, where data is spread across a network of agents. We formulate the problem as a Gaussian maximum likelihood estimation problem with structural penalties and propose a novel algorithmic framework called NetGGM. Unlike existing methods that rely on a central coordinator, NetGGM operates in a fully decentralized manner with low computational complexity. We provide theoretical guarantees showing that NetGGM converges linearly to the global optimum while ensuring consensus among agents. Numerical experiments validate its convergence properties and demonstrate that it outperforms state-of-the-art methods in precision matrix estimation.}
}
Endnote
%0 Conference Paper
%T Covariance Selection over Networks
%A Wenfu Xia
%A Fengpei Li
%A Ying Sun
%A Ziping Zhao
%B Proceedings of The 28th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2025
%E Yingzhen Li
%E Stephan Mandt
%E Shipra Agrawal
%E Emtiyaz Khan
%F pmlr-v258-xia25a
%I PMLR
%P 3385--3393
%U https://proceedings.mlr.press/v258/xia25a.html
%V 258
%X Covariance matrix estimation is a fundamental problem in multivariate data analysis, which becomes particularly challenging in high-dimensional settings due to the curse of dimensionality. To enhance estimation accuracy, structural regularization is often imposed on the precision matrix (the inverse covariance matrix) for covariance selection. In this paper, we study covariance selection in a distributed setting, where data is spread across a network of agents. We formulate the problem as a Gaussian maximum likelihood estimation problem with structural penalties and propose a novel algorithmic framework called NetGGM. Unlike existing methods that rely on a central coordinator, NetGGM operates in a fully decentralized manner with low computational complexity. We provide theoretical guarantees showing that NetGGM converges linearly to the global optimum while ensuring consensus among agents. Numerical experiments validate its convergence properties and demonstrate that it outperforms state-of-the-art methods in precision matrix estimation.
APA
Xia, W., Li, F., Sun, Y. & Zhao, Z. (2025). Covariance Selection over Networks. Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 258:3385-3393. Available from https://proceedings.mlr.press/v258/xia25a.html.
