Stochastic Gradient Push for Distributed Deep Learning

Mahmoud Assran, Nicolas Loizou, Nicolas Ballas, Mike Rabbat
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:344-353, 2019.

Abstract

Distributed data-parallel algorithms aim to accelerate the training of deep neural networks by parallelizing the computation of large mini-batch gradient updates across multiple nodes. Approaches that synchronize nodes using exact distributed averaging (e.g., via AllReduce) are sensitive to stragglers and communication delays. The PushSum gossip algorithm is robust to these issues, but only performs approximate distributed averaging. This paper studies Stochastic Gradient Push (SGP), which combines PushSum with stochastic gradient updates. We prove that SGP converges to a stationary point of smooth, non-convex objectives at the same sub-linear rate as SGD, and that all nodes achieve consensus. We empirically validate the performance of SGP on image classification (ResNet-50, ImageNet) and machine translation (Transformer, WMT’16 En-De) workloads.
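
For intuition, below is a minimal, runnable sketch of the SGP update described in the abstract, written against the standard PushSum formulation: each node keeps a numerator x_i and a scalar weight w_i, takes a local stochastic gradient step at the de-biased point z_i = x_i / w_i, and then mixes (x_i, w_i) with a column-stochastic matrix. The toy least-squares objective, the directed-ring topology, and all variable names are illustrative assumptions, not the paper's experimental setup.

# Illustrative sketch of Stochastic Gradient Push (SGP) on a toy problem.
# Assumptions: quadratic local objectives, a fixed directed-ring mixing
# matrix, and a constant step size; not the paper's ResNet/Transformer setup.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, dim, lr, steps = 8, 5, 0.05, 200

# Each node i holds its own objective f_i(z) = 0.5 * ||A_i z - b_i||^2.
A = rng.normal(size=(n_nodes, 20, dim))
b = rng.normal(size=(n_nodes, 20))

def stochastic_grad(i, z):
    """Mini-batch gradient of f_i at z (sample 4 of node i's 20 rows)."""
    idx = rng.choice(20, size=4, replace=False)
    Ai, bi = A[i][idx], b[i][idx]
    return Ai.T @ (Ai @ z - bi) / len(idx)

# Column-stochastic mixing matrix for a directed ring:
# each node keeps half of its mass and pushes half to its out-neighbor.
P = np.zeros((n_nodes, n_nodes))
for i in range(n_nodes):
    P[i, i] = 0.5                   # keep half
    P[(i + 1) % n_nodes, i] = 0.5   # push half to node i+1

x = np.tile(rng.normal(size=dim), (n_nodes, 1))  # PushSum numerators
w = np.ones(n_nodes)                             # PushSum de-biasing weights

for k in range(steps):
    z = x / w[:, None]  # de-biased parameters used for gradient evaluation
    # Local stochastic gradient step applied to the numerator.
    x = x - lr * np.stack([stochastic_grad(i, z[i]) for i in range(n_nodes)])
    # PushSum gossip: mix numerators and weights with the column-stochastic P.
    x = P @ x
    w = P @ w

z = x / w[:, None]
consensus_gap = np.linalg.norm(z - z.mean(axis=0))
print(f"consensus gap after {steps} steps: {consensus_gap:.3e}")

Because P is column-stochastic rather than doubly stochastic, the weights w correct the bias that one-directional pushing introduces, which is what lets all nodes agree on the same point without the exact averaging of AllReduce.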

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-assran19a,
  title     = {Stochastic Gradient Push for Distributed Deep Learning},
  author    = {Assran, Mahmoud and Loizou, Nicolas and Ballas, Nicolas and Rabbat, Mike},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {344--353},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/assran19a/assran19a.pdf},
  url       = {https://proceedings.mlr.press/v97/assran19a.html},
  abstract  = {Distributed data-parallel algorithms aim to accelerate the training of deep neural networks by parallelizing the computation of large mini-batch gradient updates across multiple nodes. Approaches that synchronize nodes using exact distributed averaging (e.g., via AllReduce) are sensitive to stragglers and communication delays. The PushSum gossip algorithm is robust to these issues, but only performs approximate distributed averaging. This paper studies Stochastic Gradient Push (SGP), which combines PushSum with stochastic gradient updates. We prove that SGP converges to a stationary point of smooth, non-convex objectives at the same sub-linear rate as SGD, and that all nodes achieve consensus. We empirically validate the performance of SGP on image classification (ResNet-50, ImageNet) and machine translation (Transformer, WMT’16 En-De) workloads.}
}
Endnote
%0 Conference Paper
%T Stochastic Gradient Push for Distributed Deep Learning
%A Mahmoud Assran
%A Nicolas Loizou
%A Nicolas Ballas
%A Mike Rabbat
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-assran19a
%I PMLR
%P 344--353
%U https://proceedings.mlr.press/v97/assran19a.html
%V 97
%X Distributed data-parallel algorithms aim to accelerate the training of deep neural networks by parallelizing the computation of large mini-batch gradient updates across multiple nodes. Approaches that synchronize nodes using exact distributed averaging (e.g., via AllReduce) are sensitive to stragglers and communication delays. The PushSum gossip algorithm is robust to these issues, but only performs approximate distributed averaging. This paper studies Stochastic Gradient Push (SGP), which combines PushSum with stochastic gradient updates. We prove that SGP converges to a stationary point of smooth, non-convex objectives at the same sub-linear rate as SGD, and that all nodes achieve consensus. We empirically validate the performance of SGP on image classification (ResNet-50, ImageNet) and machine translation (Transformer, WMT’16 En-De) workloads.
APA
Assran, M., Loizou, N., Ballas, N. & Rabbat, M. (2019). Stochastic Gradient Push for Distributed Deep Learning. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:344-353. Available from https://proceedings.mlr.press/v97/assran19a.html.