Riemannian Pursuit for Big Matrix Recovery

Mingkui Tan, Ivor W. Tsang, Li Wang, Bart Vandereycken, Sinno Jialin Pan
Proceedings of the 31st International Conference on Machine Learning, PMLR 32(2):1539-1547, 2014.

Abstract

Low rank matrix recovery is a fundamental task in many real-world applications. The performance of existing methods, however, deteriorates significantly when applied to ill-conditioned or large-scale matrices. In this paper, we therefore propose an efficient method, called Riemannian Pursuit (RP), that aims to address these two problems simultaneously. Our method consists of a sequence of fixed-rank optimization problems. Each subproblem, solved by a nonlinear Riemannian conjugate gradient method, aims to correct the solution in the most important subspace of increasing size. Theoretically, RP converges linearly under mild conditions and experimental results show that it substantially outperforms existing methods when applied to large-scale and ill-conditioned matrices.
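The rank-increasing strategy sketched in the abstract can be illustrated with a toy example. The snippet below is a simplified, hypothetical stand-in, not the paper's algorithm: it solves a sequence of fixed-rank matrix-completion subproblems of growing rank, but uses plain projected gradient descent (SVD truncation) for each subproblem where the paper uses a nonlinear Riemannian conjugate gradient method on the fixed-rank manifold. The function names and parameters are invented for illustration.

```python
# Toy sketch of rank-increasing matrix recovery (NOT the paper's RP method:
# the fixed-rank solver here is simple projected gradient descent, a
# stand-in for the nonlinear Riemannian conjugate gradient used in RP).
import numpy as np

def truncated_svd(X, r):
    """Project X onto the best rank-r approximation."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def rank_pursuit(M_obs, mask, max_rank, inner_iters=200, step=1.0):
    """Recover a low-rank matrix from entries where mask == 1."""
    X = np.zeros_like(M_obs)
    for r in range(1, max_rank + 1):      # outer loop: grow the rank by one
        for _ in range(inner_iters):      # inner loop: fixed-rank subproblem
            grad = mask * (X - M_obs)     # gradient of 0.5*||P(X) - P(M)||^2
            X = truncated_svd(X - step * grad, r)
    return X

rng = np.random.default_rng(0)
M = rng.standard_normal((30, 3)) @ rng.standard_normal((3, 30))  # rank-3 target
mask = (rng.random(M.shape) < 0.6).astype(float)                 # ~60% observed
X = rank_pursuit(mask * M, mask, max_rank=3)
rel_err = np.linalg.norm(X - M) / np.linalg.norm(M)
```

Growing the rank gradually, rather than optimizing at the full target rank from the start, is the structural idea the abstract describes: each subproblem corrects the solution in an increasingly large subspace.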

Cite this Paper


BibTeX
@InProceedings{pmlr-v32-tan14,
  title = {Riemannian Pursuit for Big Matrix Recovery},
  author = {Mingkui Tan and Ivor W. Tsang and Li Wang and Bart Vandereycken and Sinno Jialin Pan},
  booktitle = {Proceedings of the 31st International Conference on Machine Learning},
  pages = {1539--1547},
  year = {2014},
  editor = {Eric P. Xing and Tony Jebara},
  volume = {32},
  number = {2},
  series = {Proceedings of Machine Learning Research},
  address = {Beijing, China},
  month = {22--24 Jun},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v32/tan14.pdf},
  url = {http://proceedings.mlr.press/v32/tan14.html},
  abstract = {Low rank matrix recovery is a fundamental task in many real-world applications. The performance of existing methods, however, deteriorates significantly when applied to ill-conditioned or large-scale matrices. In this paper, we therefore propose an efficient method, called Riemannian Pursuit (RP), that aims to address these two problems simultaneously. Our method consists of a sequence of fixed-rank optimization problems. Each subproblem, solved by a nonlinear Riemannian conjugate gradient method, aims to correct the solution in the most important subspace of increasing size. Theoretically, RP converges linearly under mild conditions and experimental results show that it substantially outperforms existing methods when applied to large-scale and ill-conditioned matrices.}
}
Endnote
%0 Conference Paper
%T Riemannian Pursuit for Big Matrix Recovery
%A Mingkui Tan
%A Ivor W. Tsang
%A Li Wang
%A Bart Vandereycken
%A Sinno Jialin Pan
%B Proceedings of the 31st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2014
%E Eric P. Xing
%E Tony Jebara
%F pmlr-v32-tan14
%I PMLR
%J Proceedings of Machine Learning Research
%P 1539--1547
%U http://proceedings.mlr.press
%V 32
%N 2
%W PMLR
%X Low rank matrix recovery is a fundamental task in many real-world applications. The performance of existing methods, however, deteriorates significantly when applied to ill-conditioned or large-scale matrices. In this paper, we therefore propose an efficient method, called Riemannian Pursuit (RP), that aims to address these two problems simultaneously. Our method consists of a sequence of fixed-rank optimization problems. Each subproblem, solved by a nonlinear Riemannian conjugate gradient method, aims to correct the solution in the most important subspace of increasing size. Theoretically, RP converges linearly under mild conditions and experimental results show that it substantially outperforms existing methods when applied to large-scale and ill-conditioned matrices.
RIS
TY  - CPAPER
TI  - Riemannian Pursuit for Big Matrix Recovery
AU  - Mingkui Tan
AU  - Ivor W. Tsang
AU  - Li Wang
AU  - Bart Vandereycken
AU  - Sinno Jialin Pan
BT  - Proceedings of the 31st International Conference on Machine Learning
PY  - 2014/01/27
DA  - 2014/01/27
ED  - Eric P. Xing
ED  - Tony Jebara
ID  - pmlr-v32-tan14
PB  - PMLR
SP  - 1539
EP  - 1547
DP  - PMLR
L1  - http://proceedings.mlr.press/v32/tan14.pdf
UR  - http://proceedings.mlr.press/v32/tan14.html
AB  - Low rank matrix recovery is a fundamental task in many real-world applications. The performance of existing methods, however, deteriorates significantly when applied to ill-conditioned or large-scale matrices. In this paper, we therefore propose an efficient method, called Riemannian Pursuit (RP), that aims to address these two problems simultaneously. Our method consists of a sequence of fixed-rank optimization problems. Each subproblem, solved by a nonlinear Riemannian conjugate gradient method, aims to correct the solution in the most important subspace of increasing size. Theoretically, RP converges linearly under mild conditions and experimental results show that it substantially outperforms existing methods when applied to large-scale and ill-conditioned matrices.
ER  -
APA
Tan, M., Tsang, I.W., Wang, L., Vandereycken, B. & Pan, S.J. (2014). Riemannian Pursuit for Big Matrix Recovery. Proceedings of the 31st International Conference on Machine Learning, in PMLR 32(2):1539-1547.