Quantized Decentralized Stochastic Learning over Directed Graphs

Hossein Taheri, Aryan Mokhtari, Hamed Hassani, Ramtin Pedarsani
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:9324-9333, 2020.

Abstract

We consider a decentralized stochastic learning problem where data points are distributed among computing nodes that communicate over a directed graph. As the model size grows, decentralized learning faces a major bottleneck: the heavy communication load incurred when each node transmits large messages (model updates) to its neighbors. To address this bottleneck, we propose a quantized decentralized stochastic learning algorithm over directed graphs, built on the push-sum algorithm for decentralized consensus optimization. We prove that our algorithm achieves the same convergence rates as the exact-communication decentralized stochastic learning algorithm for both convex and non-convex losses. Numerical evaluations corroborate our main theoretical results and illustrate a significant speed-up over exact-communication methods.
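To make the communication pattern concrete, below is a minimal, self-contained sketch of stochastic gradient push over a directed graph in which each transmitted model update passes through an unbiased stochastic quantizer. This is an illustration of the general idea only, not the paper's exact algorithm or analysis; the ring topology, quadratic local losses, step size, and the `stochastic_quantize` routine are all hypothetical choices made for the example.

```python
# Minimal sketch: stochastic gradient push over a directed ring with an
# unbiased stochastic quantizer applied to the transmitted model updates.
# All concrete choices below (topology, losses, quantizer, step size) are
# hypothetical and for illustration only.
import numpy as np

rng = np.random.default_rng(0)

def stochastic_quantize(v, levels=64):
    """Unbiasedly round each entry of v onto a uniform grid covering [-r, r]."""
    r = np.max(np.abs(v)) + 1e-12
    scaled = (v / r + 1.0) / 2.0 * (levels - 1)        # map entries to [0, levels-1]
    low = np.floor(scaled)
    q = low + (rng.random(v.shape) < scaled - low)     # randomized rounding up/down
    return (q / (levels - 1) * 2.0 - 1.0) * r          # map back to [-r, r]

n, d, T, lr = 5, 10, 2000, 0.05
# Directed ring: node j pushes to itself and to node (j + 1) mod n.
# Column j of A holds the (column-stochastic) weights node j uses when pushing.
A = np.zeros((n, n))
for j in range(n):
    A[j, j] = 0.5
    A[(j + 1) % n, j] = 0.5

b = rng.normal(size=(n, d))           # local losses f_i(x) = 0.5 * ||x - b_i||^2
x = np.zeros((n, d))                  # push-sum numerators (one row per node)
w = np.ones(n)                        # push-sum weights

for t in range(T):
    z = x / w[:, None]                                 # de-biased local models
    grad = (z - b) + 0.1 * rng.normal(size=(n, d))     # noisy local gradients
    x = x - lr * grad                                  # local SGD step
    qx = np.stack([stochastic_quantize(x[i]) for i in range(n)])  # quantize outgoing messages
    x = A @ qx                                         # push/mix quantized numerators
    w = A @ w                                          # push/mix exact scalar weights

err = np.linalg.norm(x / w[:, None] - b.mean(axis=0), axis=1).mean()
print(f"mean distance of local models to the global minimizer: {err:.3f}")
```

In this sketch only the d-dimensional numerators are quantized before being pushed, which is where the communication savings would come from; the scalar push-sum weights are mixed exactly, and the column-stochastic mixing matrix is what allows the scheme to run over a directed (non-symmetric) topology.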

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-taheri20a,
  title     = {Quantized Decentralized Stochastic Learning over Directed Graphs},
  author    = {Taheri, Hossein and Mokhtari, Aryan and Hassani, Hamed and Pedarsani, Ramtin},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {9324--9333},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/taheri20a/taheri20a.pdf},
  url       = {https://proceedings.mlr.press/v119/taheri20a.html},
  abstract  = {We consider a decentralized stochastic learning problem where data points are distributed among computing nodes communicating over a directed graph. As the model size gets large, decentralized learning faces a major bottleneck that is the heavy communication load due to each node transmitting large messages (model updates) to its neighbors. To tackle this bottleneck, we propose the quantized decentralized stochastic learning algorithm over directed graphs that is based on the push-sum algorithm in decentralized consensus optimization. We prove that our algorithm achieves the same convergence rates of the decentralized stochastic learning algorithm with exact-communication for both convex and non-convex losses. Numerical evaluations corroborate our main theoretical results and illustrate significant speed-up compared to the exact-communication methods.}
}
EndNote
%0 Conference Paper
%T Quantized Decentralized Stochastic Learning over Directed Graphs
%A Hossein Taheri
%A Aryan Mokhtari
%A Hamed Hassani
%A Ramtin Pedarsani
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-taheri20a
%I PMLR
%P 9324--9333
%U https://proceedings.mlr.press/v119/taheri20a.html
%V 119
%X We consider a decentralized stochastic learning problem where data points are distributed among computing nodes communicating over a directed graph. As the model size gets large, decentralized learning faces a major bottleneck that is the heavy communication load due to each node transmitting large messages (model updates) to its neighbors. To tackle this bottleneck, we propose the quantized decentralized stochastic learning algorithm over directed graphs that is based on the push-sum algorithm in decentralized consensus optimization. We prove that our algorithm achieves the same convergence rates of the decentralized stochastic learning algorithm with exact-communication for both convex and non-convex losses. Numerical evaluations corroborate our main theoretical results and illustrate significant speed-up compared to the exact-communication methods.
APA
Taheri, H., Mokhtari, A., Hassani, H. & Pedarsani, R. (2020). Quantized Decentralized Stochastic Learning over Directed Graphs. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:9324-9333. Available from https://proceedings.mlr.press/v119/taheri20a.html.