A Unified Computational and Statistical Framework for Nonconvex Low-rank Matrix Estimation

Lingxiao Wang, Xiao Zhang, Quanquan Gu
Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, PMLR 54:981-990, 2017.

Abstract

We propose a unified framework for estimating low-rank matrices through nonconvex optimization based on the gradient descent algorithm. Our framework is quite general and applies to both noisy and noiseless observations. In the general case with noisy observations, we show that our algorithm is guaranteed to converge linearly to the unknown low-rank matrix up to a minimax-optimal statistical error, provided an appropriate initial estimator. In the noiseless setting, our algorithm converges to the unknown low-rank matrix at a linear rate and enables exact recovery with optimal sample complexity. In addition, we develop a new initialization algorithm that provides the desired initial estimator and outperforms existing initialization algorithms for nonconvex low-rank matrix estimation. We illustrate the superiority of our framework through three examples: matrix regression, matrix completion, and one-bit matrix completion. We also corroborate our theory through extensive experiments on synthetic data.
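As a rough illustration of the style of method the abstract describes (not the paper's exact algorithm), the sketch below runs plain gradient descent on a factored parameterization X = U Vᵀ for noisy matrix completion, one of the three examples listed. The SVD-based initializer, step size, and iteration count here are illustrative assumptions; the paper proposes its own initialization scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
d1, d2, r = 50, 40, 3

# Ground-truth rank-r matrix and a random observation mask.
X_star = rng.standard_normal((d1, r)) @ rng.standard_normal((r, d2))
mask = rng.random((d1, d2)) < 0.5                            # observe ~50% of entries
Y = mask * (X_star + 0.01 * rng.standard_normal((d1, d2)))   # noisy observations

# Simple initial estimator: rank-r SVD of the zero-filled, rescaled observations.
U0, s, Vt = np.linalg.svd(Y / mask.mean(), full_matrices=False)
U = U0[:, :r] * np.sqrt(s[:r])
V = Vt[:r, :].T * np.sqrt(s[:r])

eta = 0.1 / s[0]                           # step size scaled by the top singular value
for _ in range(1000):
    R = mask * (U @ V.T - Y)               # residual on observed entries only
    U, V = U - eta * (R @ V), V - eta * (R.T @ U)

rel_err = np.linalg.norm(U @ V.T - X_star) / np.linalg.norm(X_star)
```

With this oversampled synthetic setup the iterates contract quickly toward the ground truth, stalling only at a noise-driven error floor, which mirrors the "linear convergence up to statistical error" behavior the abstract claims.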

Cite this Paper


BibTeX
@InProceedings{pmlr-v54-wang17b,
  title     = {{A Unified Computational and Statistical Framework for Nonconvex Low-rank Matrix Estimation}},
  author    = {Wang, Lingxiao and Zhang, Xiao and Gu, Quanquan},
  booktitle = {Proceedings of the 20th International Conference on Artificial Intelligence and Statistics},
  pages     = {981--990},
  year      = {2017},
  editor    = {Singh, Aarti and Zhu, Jerry},
  volume    = {54},
  series    = {Proceedings of Machine Learning Research},
  month     = {20--22 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v54/wang17b/wang17b.pdf},
  url       = {https://proceedings.mlr.press/v54/wang17b.html},
  abstract  = {We propose a unified framework for estimating low-rank matrices through nonconvex optimization based on gradient descent algorithm. Our framework is quite general and can be applied to both noisy and noiseless observations. In the general case with noisy observations, we show that our algorithm is guaranteed to linearly converge to the unknown low-rank matrix up to a minimax optimal statistical error, provided an appropriate initial estimator. While in the generic noiseless setting, our algorithm converges to the unknown low-rank matrix at a linear rate and enables exact recovery with optimal sample complexity. In addition, we develop a new initialization algorithm to provide the desired initial estimator, which outperforms existing initialization algorithms for nonconvex low-rank matrix estimation. We illustrate the superiority of our framework through three examples: matrix regression, matrix completion, and one-bit matrix completion. We also corroborate our theory through extensive experiments on synthetic data.}
}
Endnote
%0 Conference Paper
%T A Unified Computational and Statistical Framework for Nonconvex Low-rank Matrix Estimation
%A Lingxiao Wang
%A Xiao Zhang
%A Quanquan Gu
%B Proceedings of the 20th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2017
%E Aarti Singh
%E Jerry Zhu
%F pmlr-v54-wang17b
%I PMLR
%P 981--990
%U https://proceedings.mlr.press/v54/wang17b.html
%V 54
%X We propose a unified framework for estimating low-rank matrices through nonconvex optimization based on gradient descent algorithm. Our framework is quite general and can be applied to both noisy and noiseless observations. In the general case with noisy observations, we show that our algorithm is guaranteed to linearly converge to the unknown low-rank matrix up to a minimax optimal statistical error, provided an appropriate initial estimator. While in the generic noiseless setting, our algorithm converges to the unknown low-rank matrix at a linear rate and enables exact recovery with optimal sample complexity. In addition, we develop a new initialization algorithm to provide the desired initial estimator, which outperforms existing initialization algorithms for nonconvex low-rank matrix estimation. We illustrate the superiority of our framework through three examples: matrix regression, matrix completion, and one-bit matrix completion. We also corroborate our theory through extensive experiments on synthetic data.
APA
Wang, L., Zhang, X., & Gu, Q. (2017). A Unified Computational and Statistical Framework for Nonconvex Low-rank Matrix Estimation. Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research, 54:981-990. Available from https://proceedings.mlr.press/v54/wang17b.html.