Covariate Adjusted Precision Matrix Estimation via Nonconvex Optimization

Jinghui Chen, Pan Xu, Lingxiao Wang, Jian Ma, Quanquan Gu
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:922-931, 2018.

Abstract

We propose a nonconvex estimator for the covariate-adjusted precision matrix estimation problem in the high-dimensional regime, under sparsity constraints. To compute this estimator, we propose an alternating gradient descent algorithm with hard thresholding. Compared with existing methods along this line of research, which lack theoretical guarantees on the optimization error and/or the statistical error, the proposed algorithm is not only computationally much more efficient, enjoying a linear rate of convergence, but also attains the optimal statistical rate up to a logarithmic factor. Thorough experiments on both synthetic and real data support our theory.
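
The abstract describes the algorithm only at a high level. As a rough illustration, here is a minimal NumPy sketch of alternating gradient descent with hard thresholding, assuming the usual covariate-adjusted Gaussian model Y = X Gamma + E with sparse regression coefficients Gamma and sparse precision matrix Theta (rows of E drawn from N(0, inv(Theta))). The function names, step size, and sparsity parameters below are illustrative choices, not the authors' implementation.

    import numpy as np

    def hard_threshold(M, s):
        """Keep the s largest-magnitude entries of M; zero out the rest."""
        flat = np.abs(M).ravel()
        if s >= flat.size:
            return M
        cutoff = np.partition(flat, -s)[-s]
        return np.where(np.abs(M) >= cutoff, M, 0.0)

    def alternating_gd_ht(X, Y, s_gamma, s_theta, eta=0.05, iters=200):
        """Sketch: alternate gradient steps on Gamma and Theta, each followed
        by hard thresholding to enforce the sparsity constraints."""
        n, p = X.shape
        d = Y.shape[1]
        Gamma = np.zeros((p, d))
        Theta = np.eye(d)
        for _ in range(iters):
            # Gradient step on Gamma for the Theta-weighted squared loss
            # tr(Theta (Y - X Gamma)^T (Y - X Gamma)) / n, then threshold.
            grad_G = -2.0 * X.T @ (Y - X @ Gamma) @ Theta / n
            Gamma = hard_threshold(Gamma - eta * grad_G, s_gamma)
            # Gradient step on Theta for the Gaussian loss
            # tr(Theta S) - logdet(Theta), where S is the residual covariance.
            R = Y - X @ Gamma
            S = R.T @ R / n
            grad_T = S - np.linalg.inv(Theta)  # assumes Theta stays invertible
            Theta = Theta - eta * grad_T
            # Threshold only the off-diagonal entries so the diagonal survives.
            diag = np.diag(np.diag(Theta))
            Theta = diag + hard_threshold(Theta - diag, s_theta)
            Theta = (Theta + Theta.T) / 2  # re-symmetrize after the update
        return Gamma, Theta

Each iteration costs a few matrix multiplications plus one d-by-d inverse, and the iterates contract toward the truth at a geometric rate in the paper's analysis, which is what the linear-convergence claim in the abstract refers to.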

Cite this Paper


BibTeX
@InProceedings{pmlr-v80-chen18n,
  title = {Covariate Adjusted Precision Matrix Estimation via Nonconvex Optimization},
  author = {Chen, Jinghui and Xu, Pan and Wang, Lingxiao and Ma, Jian and Gu, Quanquan},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages = {922--931},
  year = {2018},
  editor = {Dy, Jennifer and Krause, Andreas},
  volume = {80},
  series = {Proceedings of Machine Learning Research},
  month = {10--15 Jul},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v80/chen18n/chen18n.pdf},
  url = {https://proceedings.mlr.press/v80/chen18n.html},
  abstract = {We propose a nonconvex estimator for the covariate-adjusted precision matrix estimation problem in the high-dimensional regime, under sparsity constraints. To compute this estimator, we propose an alternating gradient descent algorithm with hard thresholding. Compared with existing methods along this line of research, which lack theoretical guarantees on the optimization error and/or the statistical error, the proposed algorithm is not only computationally much more efficient, enjoying a linear rate of convergence, but also attains the optimal statistical rate up to a logarithmic factor. Thorough experiments on both synthetic and real data support our theory.}
}
Endnote
%0 Conference Paper
%T Covariate Adjusted Precision Matrix Estimation via Nonconvex Optimization
%A Jinghui Chen
%A Pan Xu
%A Lingxiao Wang
%A Jian Ma
%A Quanquan Gu
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-chen18n
%I PMLR
%P 922--931
%U https://proceedings.mlr.press/v80/chen18n.html
%V 80
%X We propose a nonconvex estimator for the covariate-adjusted precision matrix estimation problem in the high-dimensional regime, under sparsity constraints. To compute this estimator, we propose an alternating gradient descent algorithm with hard thresholding. Compared with existing methods along this line of research, which lack theoretical guarantees on the optimization error and/or the statistical error, the proposed algorithm is not only computationally much more efficient, enjoying a linear rate of convergence, but also attains the optimal statistical rate up to a logarithmic factor. Thorough experiments on both synthetic and real data support our theory.
APA
Chen, J., Xu, P., Wang, L., Ma, J. & Gu, Q. (2018). Covariate Adjusted Precision Matrix Estimation via Nonconvex Optimization. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:922-931. Available from https://proceedings.mlr.press/v80/chen18n.html.