Towards More Efficient Stochastic Decentralized Learning: Faster Convergence and Sparse Communication

Zebang Shen, Aryan Mokhtari, Tengfei Zhou, Peilin Zhao, Hui Qian
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:4624-4633, 2018.

Abstract

The decentralized optimization problem has recently attracted growing attention. Most existing methods are deterministic, incur a high per-iteration cost, and have a convergence rate that depends quadratically on the problem condition number. Moreover, dense communication is required to ensure convergence even when the dataset is sparse. In this paper, we generalize the decentralized optimization problem to a monotone operator root finding problem and propose a stochastic algorithm named DSBA that (1) converges geometrically at a rate depending only linearly on the problem condition number, and (2) can be implemented using sparse communication alone. Additionally, DSBA handles important learning problems such as AUC maximization, which cannot be tackled efficiently in the previous problem setting. Experiments on convex minimization and AUC maximization validate the efficiency of our method.
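For concreteness, the decentralized setting and the monotone-operator generalization mentioned in the abstract can be sketched as follows. This is a minimal sketch in standard notation (n nodes, local losses f_i, local operators A_i) chosen for illustration; the exact formulation and assumptions in the paper may differ.

% Standard decentralized consensus optimization: n nodes, node i holds f_i.
\[
  \min_{x \in \mathbb{R}^d} \; \frac{1}{n} \sum_{i=1}^{n} f_i(x)
\]
% Generalization to a root (zero) finding problem for monotone operators:
% find x such that
\[
  0 \in \sum_{i=1}^{n} A_i(x)
\]
% where each A_i is a monotone operator held locally by node i.
% Taking A_i = \nabla f_i recovers the consensus optimization problem above,
% while convex-concave saddle-point reformulations (e.g., of AUC maximization)
% can also be cast as such an inclusion via their first-order optimality operator.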

Cite this Paper


BibTeX
@InProceedings{pmlr-v80-shen18a,
  title     = {Towards More Efficient Stochastic Decentralized Learning: Faster Convergence and Sparse Communication},
  author    = {Shen, Zebang and Mokhtari, Aryan and Zhou, Tengfei and Zhao, Peilin and Qian, Hui},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {4624--4633},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/shen18a/shen18a.pdf},
  url       = {https://proceedings.mlr.press/v80/shen18a.html},
  abstract  = {Recently, the decentralized optimization problem is attracting growing attention. Most existing methods are deterministic with high per-iteration cost and have a convergence rate quadratically depending on the problem condition number. Besides, the dense communication is necessary to ensure the convergence even if the dataset is sparse. In this paper, we generalize the decentralized optimization problem to a monotone operator root finding problem, and propose a stochastic algorithm named DSBA that (1) converges geometrically with a rate linearly depending on the problem condition number, and (2) can be implemented using sparse communication only. Additionally, DSBA handles important learning problems like AUC-maximization which can not be tackled efficiently in the previous problem setting. Experiments on convex minimization and AUC-maximization validate the efficiency of our method.}
}
Endnote
%0 Conference Paper
%T Towards More Efficient Stochastic Decentralized Learning: Faster Convergence and Sparse Communication
%A Zebang Shen
%A Aryan Mokhtari
%A Tengfei Zhou
%A Peilin Zhao
%A Hui Qian
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-shen18a
%I PMLR
%P 4624--4633
%U https://proceedings.mlr.press/v80/shen18a.html
%V 80
%X Recently, the decentralized optimization problem is attracting growing attention. Most existing methods are deterministic with high per-iteration cost and have a convergence rate quadratically depending on the problem condition number. Besides, the dense communication is necessary to ensure the convergence even if the dataset is sparse. In this paper, we generalize the decentralized optimization problem to a monotone operator root finding problem, and propose a stochastic algorithm named DSBA that (1) converges geometrically with a rate linearly depending on the problem condition number, and (2) can be implemented using sparse communication only. Additionally, DSBA handles important learning problems like AUC-maximization which can not be tackled efficiently in the previous problem setting. Experiments on convex minimization and AUC-maximization validate the efficiency of our method.
APA
Shen, Z., Mokhtari, A., Zhou, T., Zhao, P. & Qian, H. (2018). Towards More Efficient Stochastic Decentralized Learning: Faster Convergence and Sparse Communication. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:4624-4633. Available from https://proceedings.mlr.press/v80/shen18a.html.