A Unified Variance Reduction-Based Framework for Nonconvex Low-Rank Matrix Recovery

Lingxiao Wang, Xiao Zhang, Quanquan Gu
Proceedings of the 34th International Conference on Machine Learning, PMLR 70:3712-3721, 2017.

Abstract

We propose a generic framework based on a new stochastic variance-reduced gradient descent algorithm for accelerating nonconvex low-rank matrix recovery. Starting from an appropriate initial estimator, our proposed algorithm performs projected gradient descent based on a novel semi-stochastic gradient specifically designed for low-rank matrix recovery. Under mild restricted strong convexity and smoothness conditions, we derive a projected notion of the restricted Lipschitz continuous gradient property, and prove that our algorithm enjoys a linear convergence rate to the unknown low-rank matrix with an improved computational complexity. Moreover, our algorithm can be applied to both noiseless and noisy observations, attaining (near-)optimal sample complexity and statistical rates, respectively. We further illustrate the superiority of our generic framework through several specific examples, both theoretically and experimentally.
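To make the abstract concrete, the following is a minimal sketch of the kind of algorithm it describes: SVRG-style projected gradient descent for noiseless matrix sensing, where each stochastic step uses the semi-stochastic gradient (grad f_i at the current iterate, minus grad f_i at a snapshot, plus the full gradient at the snapshot) followed by projection onto rank-r matrices, starting from a spectral initialization. All function names, the step size, and the loop counts below are illustrative assumptions for a toy instance, not the paper's exact algorithm or constants.

```python
import numpy as np

def project_rank(M, r):
    """Project a matrix onto the set of rank-<=r matrices via truncated SVD."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def svrg_low_rank_recovery(A, y, r, step=2e-3, n_epochs=100, seed=1):
    """SVRG-style projected gradient descent for noiseless matrix sensing.

    Model: y_i = <A_i, X*> with rank(X*) <= r.
    Loss:  f(X) = (1/2n) * sum_i (<A_i, X> - y_i)^2.
    """
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    resid = lambda X: np.einsum('nij,ij->n', A, X) - y
    # Spectral initialization: P_r((1/n) sum_i y_i A_i) approximates X*.
    X = project_rank(np.einsum('n,nij->ij', y, A) / n, r)
    for _ in range(n_epochs):
        X_snap = X.copy()
        r_snap = resid(X_snap)                           # residuals at the snapshot
        full_grad = np.einsum('n,nij->ij', r_snap, A) / n  # full gradient at the snapshot
        for _ in range(n):  # inner loop: one pass worth of stochastic steps
            i = rng.integers(n)
            r_i = A[i].ravel() @ X.ravel() - y[i]
            # Semi-stochastic gradient: grad f_i(X) - grad f_i(X_snap) + full_grad.
            g = (r_i - r_snap[i]) * A[i] + full_grad
            X = project_rank(X - step * g, r)            # projected gradient step
    return X

# Toy usage: recover a random rank-2 matrix from Gaussian linear measurements.
rng = np.random.default_rng(0)
d, r, n = 8, 2, 300
X_star = rng.standard_normal((d, r)) @ rng.standard_normal((r, d))
A = rng.standard_normal((n, d, d))
y = np.einsum('nij,ij->n', A, X_star)
X_hat = svrg_low_rank_recovery(A, y, r)
rel_err = np.linalg.norm(X_hat - X_star) / np.linalg.norm(X_star)
```

In the noiseless setting the abstract refers to, this kind of iteration contracts the recovery error toward zero; with noisy observations it would instead plateau at the statistical noise level.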

Cite this Paper


BibTeX
@InProceedings{pmlr-v70-wang17n,
  title     = {A Unified Variance Reduction-Based Framework for Nonconvex Low-Rank Matrix Recovery},
  author    = {Lingxiao Wang and Xiao Zhang and Quanquan Gu},
  booktitle = {Proceedings of the 34th International Conference on Machine Learning},
  pages     = {3712--3721},
  year      = {2017},
  editor    = {Doina Precup and Yee Whye Teh},
  volume    = {70},
  series    = {Proceedings of Machine Learning Research},
  address   = {International Convention Centre, Sydney, Australia},
  month     = {06--11 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v70/wang17n/wang17n.pdf},
  url       = {http://proceedings.mlr.press/v70/wang17n.html},
  abstract  = {We propose a generic framework based on a new stochastic variance-reduced gradient descent algorithm for accelerating nonconvex low-rank matrix recovery. Starting from an appropriate initial estimator, our proposed algorithm performs projected gradient descent based on a novel semi-stochastic gradient specifically designed for low-rank matrix recovery. Based upon the mild restricted strong convexity and smoothness conditions, we derive a projected notion of the restricted Lipschitz continuous gradient property, and prove that our algorithm enjoys linear convergence rate to the unknown low-rank matrix with an improved computational complexity. Moreover, our algorithm can be employed to both noiseless and noisy observations, where the (near) optimal sample complexity and statistical rate can be attained respectively. We further illustrate the superiority of our generic framework through several specific examples, both theoretically and experimentally.}
}
Endnote
%0 Conference Paper
%T A Unified Variance Reduction-Based Framework for Nonconvex Low-Rank Matrix Recovery
%A Lingxiao Wang
%A Xiao Zhang
%A Quanquan Gu
%B Proceedings of the 34th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2017
%E Doina Precup
%E Yee Whye Teh
%F pmlr-v70-wang17n
%I PMLR
%J Proceedings of Machine Learning Research
%P 3712--3721
%U http://proceedings.mlr.press
%V 70
%W PMLR
%X We propose a generic framework based on a new stochastic variance-reduced gradient descent algorithm for accelerating nonconvex low-rank matrix recovery. Starting from an appropriate initial estimator, our proposed algorithm performs projected gradient descent based on a novel semi-stochastic gradient specifically designed for low-rank matrix recovery. Based upon the mild restricted strong convexity and smoothness conditions, we derive a projected notion of the restricted Lipschitz continuous gradient property, and prove that our algorithm enjoys linear convergence rate to the unknown low-rank matrix with an improved computational complexity. Moreover, our algorithm can be employed to both noiseless and noisy observations, where the (near) optimal sample complexity and statistical rate can be attained respectively. We further illustrate the superiority of our generic framework through several specific examples, both theoretically and experimentally.
APA
Wang, L., Zhang, X., & Gu, Q. (2017). A Unified Variance Reduction-Based Framework for Nonconvex Low-Rank Matrix Recovery. Proceedings of the 34th International Conference on Machine Learning, in PMLR 70:3712-3721.