Low-Rank and Sparse Structure Pursuit via Alternating Minimization

Quanquan Gu, Zhaoran Wang, Han Liu
Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, PMLR 51:600-609, 2016.

Abstract

In this paper, we present a nonconvex alternating minimization algorithm for low-rank and sparse structure pursuit. Compared with methods based on convex relaxation, the proposed algorithm is computationally more efficient for large-scale problems. In our study, we define a notion of bounded difference of gradients, based on which we rigorously prove that, with suitable initialization, the proposed nonconvex optimization algorithm enjoys linear convergence to the global optimum and exactly recovers the underlying low-rank and sparse matrices under standard incoherence and sparsity conditions. For a wide range of statistical models, such as multi-task learning and robust principal component analysis (RPCA), our algorithm provides a principled approach to learning the low-rank and sparse structures with provable guarantees. Thorough experiments on both synthetic and real datasets back up our theory.
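The abstract describes alternating minimization for a low-rank-plus-sparse decomposition, as in RPCA: the observed matrix is modeled as M ≈ L* + S*, and the two components are updated in turn. As a rough illustration of that idea (not the authors' exact algorithm, whose initialization and update rules are detailed in the paper), here is a minimal NumPy sketch; the function name altmin_rpca and the specific updates (a truncated-SVD low-rank step and a hard-thresholding sparse step) are assumptions for illustration only:

    import numpy as np

    def altmin_rpca(M, rank, sparsity, n_iters=50):
        # Minimal sketch, assuming truncated-SVD and hard-thresholding
        # updates; NOT the paper's exact procedure or initialization.
        M = np.asarray(M, dtype=float)
        S = np.zeros_like(M)
        for _ in range(n_iters):
            # Low-rank step: best rank-r approximation of M - S
            # via a truncated SVD.
            U, sig, Vt = np.linalg.svd(M - S, full_matrices=False)
            L = (U[:, :rank] * sig[:rank]) @ Vt[:rank]
            # Sparse step: keep only the `sparsity` largest-magnitude
            # entries of the residual M - L, zeroing out the rest.
            R = M - L
            flat = np.argsort(np.abs(R), axis=None)[-sparsity:]
            S = np.zeros_like(M)
            idx = np.unravel_index(flat, R.shape)
            S[idx] = R[idx]
        return L, S

On a synthetic instance with a planted low-rank matrix plus a sparse corruption, a scheme of this style typically recovers the planted pair when incoherence and sparsity conditions of the kind named in the abstract hold; the paper's contribution is proving linear convergence to the global optimum rigorously for its particular updates.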

Cite this Paper


BibTeX
@InProceedings{pmlr-v51-gu16, title = {Low-Rank and Sparse Structure Pursuit via Alternating Minimization}, author = {Gu, Quanquan and Wang, Zhaoran and Liu, Han}, booktitle = {Proceedings of the 19th International Conference on Artificial Intelligence and Statistics}, pages = {600--609}, year = {2016}, editor = {Gretton, Arthur and Robert, Christian C.}, volume = {51}, series = {Proceedings of Machine Learning Research}, address = {Cadiz, Spain}, month = {09--11 May}, publisher = {PMLR}, pdf = {http://proceedings.mlr.press/v51/gu16.pdf}, url = {https://proceedings.mlr.press/v51/gu16.html}, abstract = {In this paper, we present a nonconvex alternating minimization algorithm for low-rank and sparse structure pursuit. Compared with methods based on convex relaxation, the proposed algorithm is computationally more efficient for large-scale problems. In our study, we define a notion of bounded difference of gradients, based on which we rigorously prove that, with suitable initialization, the proposed nonconvex optimization algorithm enjoys linear convergence to the global optimum and exactly recovers the underlying low-rank and sparse matrices under standard incoherence and sparsity conditions. For a wide range of statistical models, such as multi-task learning and robust principal component analysis (RPCA), our algorithm provides a principled approach to learning the low-rank and sparse structures with provable guarantees. Thorough experiments on both synthetic and real datasets back up our theory.} }
Endnote
%0 Conference Paper %T Low-Rank and Sparse Structure Pursuit via Alternating Minimization %A Quanquan Gu %A Zhaoran Wang %A Han Liu %B Proceedings of the 19th International Conference on Artificial Intelligence and Statistics %C Proceedings of Machine Learning Research %D 2016 %E Arthur Gretton %E Christian C. Robert %F pmlr-v51-gu16 %I PMLR %P 600--609 %U https://proceedings.mlr.press/v51/gu16.html %V 51 %X In this paper, we present a nonconvex alternating minimization algorithm for low-rank and sparse structure pursuit. Compared with methods based on convex relaxation, the proposed algorithm is computationally more efficient for large-scale problems. In our study, we define a notion of bounded difference of gradients, based on which we rigorously prove that, with suitable initialization, the proposed nonconvex optimization algorithm enjoys linear convergence to the global optimum and exactly recovers the underlying low-rank and sparse matrices under standard incoherence and sparsity conditions. For a wide range of statistical models, such as multi-task learning and robust principal component analysis (RPCA), our algorithm provides a principled approach to learning the low-rank and sparse structures with provable guarantees. Thorough experiments on both synthetic and real datasets back up our theory.
RIS
TY - CPAPER TI - Low-Rank and Sparse Structure Pursuit via Alternating Minimization AU - Quanquan Gu AU - Zhaoran Wang AU - Han Liu BT - Proceedings of the 19th International Conference on Artificial Intelligence and Statistics DA - 2016/05/02 ED - Arthur Gretton ED - Christian C. Robert ID - pmlr-v51-gu16 PB - PMLR DP - Proceedings of Machine Learning Research VL - 51 SP - 600 EP - 609 L1 - http://proceedings.mlr.press/v51/gu16.pdf UR - https://proceedings.mlr.press/v51/gu16.html AB - In this paper, we present a nonconvex alternating minimization algorithm for low-rank and sparse structure pursuit. Compared with methods based on convex relaxation, the proposed algorithm is computationally more efficient for large-scale problems. In our study, we define a notion of bounded difference of gradients, based on which we rigorously prove that, with suitable initialization, the proposed nonconvex optimization algorithm enjoys linear convergence to the global optimum and exactly recovers the underlying low-rank and sparse matrices under standard incoherence and sparsity conditions. For a wide range of statistical models, such as multi-task learning and robust principal component analysis (RPCA), our algorithm provides a principled approach to learning the low-rank and sparse structures with provable guarantees. Thorough experiments on both synthetic and real datasets back up our theory. ER -
APA
Gu, Q., Wang, Z. & Liu, H. (2016). Low-Rank and Sparse Structure Pursuit via Alternating Minimization. Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 51:600-609. Available from https://proceedings.mlr.press/v51/gu16.html.