Coded Sparse Matrix Multiplication

Sinong Wang, Jiashang Liu, Ness Shroff
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:5152-5160, 2018.

Abstract

In a large-scale and distributed matrix multiplication problem $C=A^{\intercal}B$, where $C\in\mathbb{R}^{r\times t}$, coded computation plays an important role in effectively dealing with “stragglers” (distributed computations that may get delayed due to a few slow or faulty processors). However, existing coded schemes can destroy the significant sparsity that exists in large-scale machine learning problems, and can result in much higher computation overhead, i.e., $O(rt)$ decoding time. In this paper, we develop a new coded computation strategy, which we call the sparse code, that achieves a near-optimal recovery threshold, low computation overhead, and linear decoding time $O(nnz(C))$. We implement our scheme and demonstrate the advantage of the approach over both uncoded strategies and the fastest current coded strategies.
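To make the coded-computation setting concrete, here is a minimal sketch of straggler-tolerant distributed multiplication. This is *not* the paper's sparse code; it is a simple (3, 2) MDS-style scheme, with all names and dimensions chosen for illustration, showing how a coded redundant task lets the master recover $C=A^{\intercal}B$ from any two of three worker results.

```python
import numpy as np

# Hypothetical setup: A is s x r and B is s x t, so C = A^T B is r x t.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 6))
B = rng.standard_normal((4, 5))

# Split A column-wise into two blocks; the third (coded) task is their sum,
# giving a (3, 2) MDS-style code: any 2 of 3 results suffice.
A1, A2 = A[:, :3], A[:, 3:]
tasks = [A1, A2, A1 + A2]

# Each worker i computes tasks[i].T @ B. Suppose worker 0 straggles and
# never returns; the master proceeds with workers 1 and 2.
results = {i: tasks[i].T @ B for i in (1, 2)}

# Decode: (A1 + A2)^T B - A2^T B recovers the missing block A1^T B.
C_top = results[2] - results[1]   # A1^T B
C_bot = results[1]                # A2^T B
C_decoded = np.vstack([C_top, C_bot])

assert np.allclose(C_decoded, A.T @ B)
```

The decoding step here is dense subtraction over full blocks, which is exactly the $O(rt)$-type overhead the abstract criticizes when $C$ is sparse; the paper's sparse code is designed to bring decoding down to $O(nnz(C))$.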

Cite this Paper


BibTeX
@InProceedings{pmlr-v80-wang18e,
  title     = {Coded Sparse Matrix Multiplication},
  author    = {Wang, Sinong and Liu, Jiashang and Shroff, Ness},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {5152--5160},
  year      = {2018},
  editor    = {Jennifer Dy and Andreas Krause},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  address   = {Stockholmsmässan, Stockholm Sweden},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/wang18e/wang18e.pdf},
  url       = {http://proceedings.mlr.press/v80/wang18e.html},
  abstract  = {In a large-scale and distributed matrix multiplication problem $C=A^{\intercal}B$, where $C\in\mathbb{R}^{r\times t}$, the coded computation plays an important role to effectively deal with “stragglers” (distributed computations that may get delayed due to few slow or faulty processors). However, existing coded schemes could destroy the significant sparsity that exists in large-scale machine learning problems, and could result in much higher computation overhead, i.e., $O(rt)$ decoding time. In this paper, we develop a new coded computation strategy, we call sparse code, which achieves near optimal recovery threshold, low computation overhead, and linear decoding time $O(nnz(C))$. We implement our scheme and demonstrate the advantage of the approach over both uncoded and current fastest coded strategies.}
}
Endnote
%0 Conference Paper
%T Coded Sparse Matrix Multiplication
%A Sinong Wang
%A Jiashang Liu
%A Ness Shroff
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-wang18e
%I PMLR
%J Proceedings of Machine Learning Research
%P 5152--5160
%U http://proceedings.mlr.press
%V 80
%W PMLR
%X In a large-scale and distributed matrix multiplication problem $C=A^{\intercal}B$, where $C\in\mathbb{R}^{r\times t}$, the coded computation plays an important role to effectively deal with “stragglers” (distributed computations that may get delayed due to few slow or faulty processors). However, existing coded schemes could destroy the significant sparsity that exists in large-scale machine learning problems, and could result in much higher computation overhead, i.e., $O(rt)$ decoding time. In this paper, we develop a new coded computation strategy, we call sparse code, which achieves near optimal recovery threshold, low computation overhead, and linear decoding time $O(nnz(C))$. We implement our scheme and demonstrate the advantage of the approach over both uncoded and current fastest coded strategies.
APA
Wang, S., Liu, J. & Shroff, N. (2018). Coded Sparse Matrix Multiplication. Proceedings of the 35th International Conference on Machine Learning, in PMLR 80:5152-5160.