Communication-Efficient Distributed SVD via Local Power Iterations

Xiang Li, Shusen Wang, Kun Chen, Zhihua Zhang
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:6504-6514, 2021.

Abstract

We study the distributed computation of the truncated singular value decomposition (SVD). We develop an algorithm, \texttt{LocalPower}, that improves communication efficiency. Specifically, we uniformly partition the dataset among m nodes and alternate between multiple (precisely p) local power iterations and one global aggregation. In the aggregation, we propose to weight each local eigenvector matrix with an orthogonal Procrustes transformation (OPT). As a practical surrogate for OPT, sign-fixing, which uses a diagonal matrix with ±1 entries as weights, has lower computational complexity and better stability in experiments. We show theoretically that, under certain assumptions, \texttt{LocalPower} lowers the number of required communication rounds by a factor of p to reach constant accuracy. We also show that periodically decaying p helps obtain high-precision solutions. We conduct experiments to demonstrate the effectiveness of \texttt{LocalPower}.
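
The abstract is concrete enough to sketch the algorithm's structure. The NumPy snippet below is a minimal, illustrative reading of \texttt{LocalPower} with sign-fixing aggregation, not the authors' reference implementation: each node runs p power iterations on its local Gram matrix, and the aggregation aligns the local eigenvector matrices with ±1 column weights before averaging. The function names, the QR-based orthonormalization, and the choice of the first node as the alignment reference are our assumptions; the `procrustes_align` helper shows the OPT weighting for comparison.

```python
import numpy as np

def procrustes_align(U_i, ref):
    """OPT weighting: the orthogonal D_i minimizing ||U_i @ D_i - ref||_F."""
    W, _, Vt = np.linalg.svd(U_i.T @ ref)
    return U_i @ (W @ Vt)

def sign_align(U_i, ref):
    """Sign-fixing: a diagonal +/-1 weighting, a cheap surrogate of OPT."""
    signs = np.sign(np.sum(U_i * ref, axis=0))  # signs of diag(U_i^T ref)
    signs[signs == 0] = 1.0
    return U_i * signs  # flip columns to match the reference

def local_power(A_parts, k, p, rounds, align=sign_align, seed=0):
    """Sketch of a LocalPower-style estimate of the top-k right singular
    subspace of the row-partitioned matrix A = [A_1; ...; A_m]."""
    d = A_parts[0].shape[1]
    rng = np.random.default_rng(seed)
    U, _ = np.linalg.qr(rng.standard_normal((d, k)))  # shared random start
    for _ in range(rounds):
        local_Us = []
        for A_i in A_parts:              # in practice, in parallel on m nodes
            M_i = A_i.T @ A_i            # local d x d Gram matrix
            U_i = U
            for _ in range(p):           # p local power iterations, no communication
                U_i, _ = np.linalg.qr(M_i @ U_i)
            local_Us.append(U_i)
        ref = local_Us[0]                # align all nodes to node 1 (our choice)
        avg = sum(align(U_i, ref) for U_i in local_Us) / len(local_Us)
        U, _ = np.linalg.qr(avg)         # one global aggregation per round
    return U
```

Passing `align=procrustes_align` gives the OPT weighting described in the abstract; sign-fixing keeps only a diagonal ±1 part of that transformation, which is cheaper per round. The paper's periodic-decay strategy would additionally shrink p across rounds to push toward high-precision solutions.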

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-li21u,
  title     = {Communication-Efficient Distributed SVD via Local Power Iterations},
  author    = {Li, Xiang and Wang, Shusen and Chen, Kun and Zhang, Zhihua},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {6504--6514},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/li21u/li21u.pdf},
  url       = {https://proceedings.mlr.press/v139/li21u.html},
  abstract  = {We study distributed computing of the truncated singular value decomposition (SVD). We develop an algorithm that we call \texttt{LocalPower} for improving communication efficiency. Specifically, we uniformly partition the dataset among $m$ nodes and alternate between multiple (precisely $p$) local power iterations and one global aggregation. In the aggregation, we propose to weight each local eigenvector matrix with orthogonal Procrustes transformation (OPT). As a practical surrogate of OPT, sign-fixing, which uses a diagonal matrix with $\pm 1$ entries as weights, has better computation complexity and stability in experiments. We theoretically show that under certain assumptions \texttt{LocalPower} lowers the required number of communications by a factor of $p$ to reach a constant accuracy. We also show that the strategy of periodically decaying $p$ helps obtain high-precision solutions. We conduct experiments to demonstrate the effectiveness of \texttt{LocalPower}.}
}
Endnote
%0 Conference Paper
%T Communication-Efficient Distributed SVD via Local Power Iterations
%A Xiang Li
%A Shusen Wang
%A Kun Chen
%A Zhihua Zhang
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-li21u
%I PMLR
%P 6504--6514
%U https://proceedings.mlr.press/v139/li21u.html
%V 139
%X We study distributed computing of the truncated singular value decomposition (SVD). We develop an algorithm that we call \texttt{LocalPower} for improving communication efficiency. Specifically, we uniformly partition the dataset among $m$ nodes and alternate between multiple (precisely $p$) local power iterations and one global aggregation. In the aggregation, we propose to weight each local eigenvector matrix with orthogonal Procrustes transformation (OPT). As a practical surrogate of OPT, sign-fixing, which uses a diagonal matrix with $\pm 1$ entries as weights, has better computation complexity and stability in experiments. We theoretically show that under certain assumptions \texttt{LocalPower} lowers the required number of communications by a factor of $p$ to reach a constant accuracy. We also show that the strategy of periodically decaying $p$ helps obtain high-precision solutions. We conduct experiments to demonstrate the effectiveness of \texttt{LocalPower}.
APA
Li, X., Wang, S., Chen, K. & Zhang, Z. (2021). Communication-Efficient Distributed SVD via Local Power Iterations. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:6504-6514. Available from https://proceedings.mlr.press/v139/li21u.html.