Rivalry of Two Families of Algorithms for Memory-Restricted Streaming PCA

Chun-Liang Li, Hsuan-Tien Lin, Chi-Jen Lu
Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, PMLR 51:473-481, 2016.

Abstract

We study the problem of recovering the subspace spanned by the first k principal components of d-dimensional data in the streaming setting, with a memory bound of O(kd). Two families of algorithms are known for this problem. The first family is based on the framework of stochastic gradient descent. However, the convergence rate of this family can be seriously affected by the learning rate of the descent steps and deserves closer study. The second family is based on the power method over blocks of data, but setting the block size for its existing algorithms is not an easy task. In this paper, we analyze the convergence rate of a representative algorithm with a decayed learning rate (Oja and Karhunen, 1985) in the first family for the general k>1 case. Moreover, we propose a novel algorithm for the second family that sets the block sizes automatically and dynamically and achieves a faster convergence rate. We then conduct empirical studies that fairly compare the two families on real-world data. The studies reveal the advantages and disadvantages of these two families.
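The two families described in the abstract can be illustrated with short sketches. The Python code below is illustrative only: the function names, the step-size constant c, and the fixed block_size are assumptions, not details from the paper. The first function shows a generic Oja-style update (the SGD family) with a 1/t decayed learning rate; the second shows a generic block power method with a user-chosen block size. The paper's proposed algorithm instead sets block sizes automatically and dynamically, which this sketch does not reproduce. Both sketches keep only a d-by-k matrix, matching the O(kd) memory bound.

import numpy as np

def streaming_pca_oja(stream, d, k, c=1.0):
    # SGD family (in the spirit of Oja and Karhunen, 1985): one rank-one
    # gradient step per sample with a decayed learning rate eta_t = c / t,
    # followed by QR to keep the d x k iterate orthonormal.
    rng = np.random.default_rng(0)
    Q, _ = np.linalg.qr(rng.standard_normal((d, k)))
    for t, x in enumerate(stream, start=1):
        x = np.asarray(x).reshape(-1, 1)
        eta = c / t                      # decayed learning rate
        Q = Q + eta * (x @ (x.T @ Q))    # step on x x^T Q without forming x x^T
        Q, _ = np.linalg.qr(Q)           # re-orthonormalize
    return Q                             # columns span the estimated top-k subspace

def streaming_pca_block_power(stream, d, k, block_size=1000):
    # Block power-method family: accumulate S = (sum of x x^T over a block) Q
    # one sample at a time, then take a QR step once per block. A fixed
    # block_size is used here; the paper's algorithm chooses block sizes
    # automatically and dynamically instead.
    rng = np.random.default_rng(0)
    Q, _ = np.linalg.qr(rng.standard_normal((d, k)))
    S = np.zeros((d, k))
    filled = 0
    for x in stream:
        x = np.asarray(x).reshape(-1, 1)
        S += x @ (x.T @ Q)               # still only O(kd) memory
        filled += 1
        if filled == block_size:
            Q, _ = np.linalg.qr(S)       # one power iteration per block
            S = np.zeros((d, k))
            filled = 0
    return Q

As the abstract notes, the first sketch is sensitive to the learning-rate schedule (here the constant c), while the second is sensitive to the block size; this is what motivates both the convergence analysis of the decayed-learning-rate algorithm and the automatic block-size scheme studied in the paper.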

Cite this Paper


BibTeX
@InProceedings{pmlr-v51-li16b,
  title     = {Rivalry of Two Families of Algorithms for Memory-Restricted Streaming PCA},
  author    = {Li, Chun-Liang and Lin, Hsuan-Tien and Lu, Chi-Jen},
  booktitle = {Proceedings of the 19th International Conference on Artificial Intelligence and Statistics},
  pages     = {473--481},
  year      = {2016},
  editor    = {Gretton, Arthur and Robert, Christian C.},
  volume    = {51},
  series    = {Proceedings of Machine Learning Research},
  address   = {Cadiz, Spain},
  month     = {09--11 May},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v51/li16b.pdf},
  url       = {https://proceedings.mlr.press/v51/li16b.html},
  abstract  = {We study the problem of recovering the subspace spanned by the first k principal components of d-dimensional data under the streaming setting, with a memory bound of O(kd). Two families of algorithms are known for this problem. The first family is based on the framework of stochastic gradient descent. Nevertheless, the convergence rate of the family can be seriously affected by the learning rate of the descent steps and deserves more serious study. The second family is based on the power method over blocks of data, but setting the block size for its existing algorithms is not an easy task. In this paper, we analyze the convergence rate of a representative algorithm with decayed learning rate (Oja and Karhunen, 1985) in the first family for the general k>1 case. Moreover, we propose a novel algorithm for the second family that sets the block sizes automatically and dynamically with faster convergence rate. We then conduct empirical studies that fairly compare the two families on real-world data. The studies reveal the advantages and disadvantages of these two families.}
}
APA
Li, C., Lin, H. & Lu, C. (2016). Rivalry of Two Families of Algorithms for Memory-Restricted Streaming PCA. Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 51:473-481. Available from https://proceedings.mlr.press/v51/li16b.html.
