Principal Component Analysis and Higher Correlations for Distributed Data

Ravi Kannan, Santosh Vempala, David Woodruff
Proceedings of The 27th Conference on Learning Theory, PMLR 35:1040-1057, 2014.

Abstract

We consider algorithmic problems in the setting in which the input data has been partitioned arbitrarily on many servers. The goal is to compute a function of all the data, and the bottleneck is the communication used by the algorithm. We present algorithms for two illustrative problems on massive data sets: (1) computing a low-rank approximation of a matrix A=A^1 + A^2 + \ldots + A^s, with matrix A^t stored on server t and (2) computing a function of a vector a_1 + a_2 + \ldots + a_s, where server t has the vector a_t; this includes the well-studied special case of computing frequency moments and separable functions, as well as higher-order correlations such as the number of subgraphs of a specified type occurring in a graph. For both problems we give algorithms with nearly optimal communication, and in particular the only dependence on n, the size of the data, is in the number of bits needed to represent indices and words (O(\log n)).
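To make the communication pattern behind problem (1) concrete: because sketching is linear, each server can apply a common random projection to its local share and ship only the small sketch, so the communication depends on the sketch size rather than on n. The following numpy snippet is a minimal sketch of that general principle only; the matrix sizes, the Gaussian sketch S, and the sketch dimension p are illustrative assumptions for the demo, not the paper's construction or its communication bounds.

```python
import numpy as np

# Toy instance of the distributed setting: s servers, server t holding an
# n x d share A^t of the implicit matrix A = A^1 + ... + A^s.
s, n, d, k = 4, 500, 40, 5           # sizes are arbitrary for illustration
rng = np.random.default_rng(0)
shares = [rng.standard_normal((n, d)) for _ in range(s)]

# Each server applies the SAME p x n random sketch S, generated from a
# shared seed, to its local share. Sending S @ A^t costs O(p*d) words per
# server, independent of n (beyond generating S from the seed).
p = 4 * k                            # sketch size: an illustrative choice
S = np.random.default_rng(1234).standard_normal((p, n)) / np.sqrt(p)
sketches = [S @ At for At in shares]

# The coordinator sums the sketches; by linearity,
#   sum_t S @ A^t = S @ (A^1 + ... + A^s) = S @ A.
SA = sum(sketches)

# Use the top-k right singular subspace of the small p x d matrix SA as the
# subspace onto which A is projected for a rank-k approximation.
_, _, Vt = np.linalg.svd(SA, full_matrices=False)
Vk = Vt[:k].T                        # d x k orthonormal basis

A = sum(shares)                      # formed here only to measure the error
A_k = (A @ Vk) @ Vk.T                # rank-k approximation of A

sigma = np.linalg.svd(A, compute_uv=False)
print("sketched rank-k error :", np.linalg.norm(A - A_k, "fro"))
print("optimal  rank-k error :", np.linalg.norm(sigma[k:]))
```

The particular Gaussian sketch above is a stand-in; any linear sketch with a suitable subspace-embedding property would serve. The point is only that S(A^1 + \ldots + A^s) = SA^1 + \ldots + SA^s, so the coordinator never needs the raw n x d data.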

Cite this Paper


BibTeX
@InProceedings{pmlr-v35-kannan14,
  title     = {Principal Component Analysis and Higher Correlations for Distributed Data},
  author    = {Kannan, Ravi and Vempala, Santosh and Woodruff, David},
  booktitle = {Proceedings of The 27th Conference on Learning Theory},
  pages     = {1040--1057},
  year      = {2014},
  editor    = {Balcan, Maria Florina and Feldman, Vitaly and Szepesvári, Csaba},
  volume    = {35},
  series    = {Proceedings of Machine Learning Research},
  address   = {Barcelona, Spain},
  month     = {13--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v35/kannan14.pdf},
  url       = {https://proceedings.mlr.press/v35/kannan14.html}
}
APA
Kannan, R., Vempala, S., & Woodruff, D. (2014). Principal Component Analysis and Higher Correlations for Distributed Data. Proceedings of The 27th Conference on Learning Theory, in Proceedings of Machine Learning Research 35:1040-1057. Available from https://proceedings.mlr.press/v35/kannan14.html.
