Coded Sparse Matrix Multiplication

Sinong Wang, Jiashang Liu, Ness Shroff
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:5152-5160, 2018.

Abstract

In a large-scale, distributed matrix multiplication problem $C=A^{\intercal}B$, where $C\in\mathbb{R}^{r\times t}$, coded computation plays an important role in effectively dealing with “stragglers” (distributed computations that are delayed by a few slow or faulty processors). However, existing coded schemes can destroy the significant sparsity that exists in large-scale machine learning problems, resulting in much higher computation overhead, i.e., $O(rt)$ decoding time. In this paper, we develop a new coded computation strategy, which we call sparse code, that achieves a near-optimal recovery threshold, low computation overhead, and linear decoding time $O(nnz(C))$. We implement our scheme and demonstrate its advantage over both uncoded and the current fastest coded strategies.
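For orientation, the sketch below (ours, not from the paper) illustrates the generic coded-computation idea the abstract builds on: split $A^{\intercal}$ into $k$ row blocks, hand each of $n > k$ workers a random linear combination of the blocks, and recover $C$ from any $k$ finished workers by inverting the corresponding $k \times k$ submatrix of the generator matrix. The dense Gaussian generator used here is exactly the kind of code the abstract criticizes, since it destroys sparsity and its decoding is not $O(nnz(C))$; the paper's sparse code instead uses sparse random combinations. All sizes and names below are illustrative.

import numpy as np

rng = np.random.default_rng(0)

# Toy problem: C = A^T B with C of shape r x t.  (Hypothetical sizes.)
k, n = 4, 6                  # k data blocks, n workers: tolerates n - k stragglers
r, s, t = 400, 300, 200      # r must be divisible by k
A = rng.standard_normal((s, r))
B = rng.standard_normal((s, t))

# Split A^T into k row blocks; worker i will receive one coded block.
blocks = np.split(A.T, k, axis=0)            # each block: (r/k) x s

# Encode with a DENSE random generator matrix G (n x k).  Dense combinations
# are what ruin sparsity in practice; the paper's sparse code makes G sparse.
G = rng.standard_normal((n, k))              # full rank with high probability
coded = [sum(G[i, j] * blocks[j] for j in range(k)) for i in range(n)]

# Each worker computes its partial product; pretend workers 1 and 4 straggle.
results = {i: coded[i] @ B for i in range(n) if i not in {1, 4}}

# Decode from the first k responders: y_i = sum_j G[i, j] (blocks[j] @ B),
# so stacking the k results into Y gives Y = G_sub X, i.e. X = G_sub^{-1} Y.
idx = sorted(results)[:k]
Y = np.stack([results[i] for i in idx])      # shape: k x (r/k) x t
X = np.einsum('kj,jab->kab', np.linalg.inv(G[idx, :]), Y)
C = np.vstack(list(X))                       # shape: r x t

assert np.allclose(C, A.T @ B)

Note that the inversion-based decoding above costs far more than $O(nnz(C))$; achieving linear-time decoding while tolerating stragglers is precisely the contribution of the paper's sparse code.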

Cite this Paper


BibTeX
@InProceedings{pmlr-v80-wang18e,
  title     = {Coded Sparse Matrix Multiplication},
  author    = {Wang, Sinong and Liu, Jiashang and Shroff, Ness},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {5152--5160},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/wang18e/wang18e.pdf},
  url       = {https://proceedings.mlr.press/v80/wang18e.html},
  abstract  = {In a large-scale and distributed matrix multiplication problem $C=A^{\intercal}B$, where $C\in\mathbb{R}^{r\times t}$, the coded computation plays an important role to effectively deal with “stragglers” (distributed computations that may get delayed due to few slow or faulty processors). However, existing coded schemes could destroy the significant sparsity that exists in large-scale machine learning problems, and could result in much higher computation overhead, i.e., $O(rt)$ decoding time. In this paper, we develop a new coded computation strategy, we call sparse code, which achieves near optimal recovery threshold, low computation overhead, and linear decoding time $O(nnz(C))$. We implement our scheme and demonstrate the advantage of the approach over both uncoded and current fastest coded strategies.}
}
Endnote
%0 Conference Paper
%T Coded Sparse Matrix Multiplication
%A Sinong Wang
%A Jiashang Liu
%A Ness Shroff
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-wang18e
%I PMLR
%P 5152--5160
%U https://proceedings.mlr.press/v80/wang18e.html
%V 80
%X In a large-scale and distributed matrix multiplication problem $C=A^{\intercal}B$, where $C\in\mathbb{R}^{r\times t}$, the coded computation plays an important role to effectively deal with “stragglers” (distributed computations that may get delayed due to few slow or faulty processors). However, existing coded schemes could destroy the significant sparsity that exists in large-scale machine learning problems, and could result in much higher computation overhead, i.e., $O(rt)$ decoding time. In this paper, we develop a new coded computation strategy, we call sparse code, which achieves near optimal recovery threshold, low computation overhead, and linear decoding time $O(nnz(C))$. We implement our scheme and demonstrate the advantage of the approach over both uncoded and current fastest coded strategies.
APA
Wang, S., Liu, J. & Shroff, N. (2018). Coded Sparse Matrix Multiplication. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:5152-5160. Available from https://proceedings.mlr.press/v80/wang18e.html.
