Global Convergence of a Grassmannian Gradient Descent Algorithm for Subspace Estimation

Dejiao Zhang, Laura Balzano
Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, PMLR 51:1460-1468, 2016.

Abstract

It has been observed in a variety of contexts that gradient descent methods have great success in solving low-rank matrix factorization problems, despite the relevant problem formulation being non-convex. We tackle a particular instance of this scenario, where we seek the d-dimensional subspace spanned by a streaming data matrix. We apply the natural first-order incremental gradient descent method, constraining the iterates to the Grassmannian. In this paper, we propose an adaptive step size scheme that is greedy in the noiseless case, in that it maximizes the improvement of our metric of convergence at each data index t, and that yields an improvement in expectation in the noisy case. We show that, with noise-free data, this method converges from any random initialization to the global minimum of the problem. For noisy data, we provide the expected convergence rate of the proposed algorithm per iteration.
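
To make the setting concrete, the following is a minimal NumPy sketch of one incremental gradient step on the Grassmannian in the spirit of this paper (a GROUSE-style rank-one update), using the greedy step size appropriate for noiseless data: the subspace is rotated just enough that the current data vector lies in the updated subspace. The function name, the numerical tolerance, and the assumption of a fully observed data vector are illustrative choices, not details taken from the paper.

```python
import numpy as np

def grouse_greedy_step(U, v):
    """One Grassmannian incremental gradient step (GROUSE-style sketch).

    U : (n, d) matrix with orthonormal columns, the current subspace estimate.
    v : (n,) fully observed streaming data vector.
    Returns an updated (n, d) orthonormal basis whose span contains v
    (the greedy, noiseless-case step size).
    """
    w = U.T @ v                      # least-squares weights of v on span(U)
    p = U @ w                        # projection of v onto the current subspace
    r = v - p                        # residual, orthogonal to span(U)
    p_norm, r_norm = np.linalg.norm(p), np.linalg.norm(r)
    if p_norm < 1e-12 or r_norm < 1e-12:
        return U                     # degenerate case: skip the update
    # Greedy step size: rotate along the geodesic until v lies in the new span.
    theta = np.arctan(r_norm / p_norm)
    direction = (np.cos(theta) - 1.0) * p / p_norm + np.sin(theta) * r / r_norm
    return U + np.outer(direction, w / np.linalg.norm(w))
```

With noiseless streaming vectors drawn from a fixed d-dimensional subspace, repeatedly applying this update from a random orthonormal initialization (e.g., the Q factor of np.linalg.qr applied to a random n-by-d matrix) drives the estimate toward that subspace; in the noisy case the paper instead analyzes an adaptive step size that yields improvement in expectation, rather than the fully greedy angle used here.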

Cite this Paper


BibTeX
@InProceedings{pmlr-v51-zhang16b,
  title     = {Global Convergence of a Grassmannian Gradient Descent Algorithm for Subspace Estimation},
  author    = {Zhang, Dejiao and Balzano, Laura},
  booktitle = {Proceedings of the 19th International Conference on Artificial Intelligence and Statistics},
  pages     = {1460--1468},
  year      = {2016},
  editor    = {Gretton, Arthur and Robert, Christian C.},
  volume    = {51},
  series    = {Proceedings of Machine Learning Research},
  address   = {Cadiz, Spain},
  month     = {09--11 May},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v51/zhang16b.pdf},
  url       = {https://proceedings.mlr.press/v51/zhang16b.html},
  abstract  = {It has been observed in a variety of contexts that gradient descent methods have great success in solving low-rank matrix factorization problems, despite the relevant problem formulation being non-convex. We tackle a particular instance of this scenario, where we seek the d-dimensional subspace spanned by a streaming data matrix. We apply the natural first order incremental gradient descent method, constraining the gradient method to the Grassmannian. In this paper, we propose an adaptive step size scheme that is greedy for the noiseless case, that maximizes the improvement of our metric of convergence at each data index t, and yields an expected improvement for the noisy case. We show that, with noise-free data, this method converges from any random initialization to the global minimum of the problem. For noisy data, we provide the expected convergence rate of the proposed algorithm per iteration.}
}
APA
Zhang, D. & Balzano, L. (2016). Global Convergence of a Grassmannian Gradient Descent Algorithm for Subspace Estimation. Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 51:1460-1468. Available from https://proceedings.mlr.press/v51/zhang16b.html.
