On the Convergence of Decentralized Adaptive Gradient Methods

Xiangyi Chen, Belhal Karimi, Weijie Zhao, Ping Li
Proceedings of The 14th Asian Conference on Machine Learning, PMLR 189:217-232, 2023.

Abstract

Adaptive gradient methods, including Adam, AdaGrad, and their variants, have been very successful for training deep learning models such as neural networks. Meanwhile, given the need for distributed computing, distributed optimization algorithms are rapidly becoming a focal point. With the growth of computing power and the need to run machine learning models on mobile devices, the communication cost of distributed training algorithms requires careful consideration. In this paper, we introduce novel convergent decentralized adaptive gradient methods and rigorously incorporate adaptive gradient methods into decentralized training procedures. Specifically, we propose a general algorithmic framework that can convert existing adaptive gradient methods into their decentralized counterparts. In addition, we thoroughly analyze the convergence behavior of the proposed framework and show that, under certain conditions, if a given adaptive gradient method converges, then its decentralized counterpart converges as well. We illustrate the benefit of our generic decentralized framework on two prototype methods, AMSGrad and AdaGrad.
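The abstract does not spell out the framework, but the general idea it describes, alternating neighbor averaging over a mixing matrix with local adaptive updates, can be sketched in a few lines. The following is an illustrative toy in NumPy, not the paper's exact algorithm: the AMSGrad-style update, the mixing matrix `W`, the quadratic per-node objectives, and all hyperparameters below are assumptions made for demonstration.

```python
import numpy as np

def decentralized_amsgrad(grad_fns, W, d, steps=500, lr=0.05,
                          beta1=0.9, beta2=0.99, eps=1e-8):
    """Illustrative decentralized AMSGrad-style loop (not the paper's exact method).

    grad_fns : per-node gradient oracles, grad_fns[i](x) -> array of shape (d,)
    W        : doubly stochastic mixing matrix (n x n); W[i, j] > 0 only if
               nodes i and j can communicate
    """
    n = len(grad_fns)
    x = np.zeros((n, d))      # each row is one node's local iterate
    m = np.zeros((n, d))      # first-moment (momentum) estimates
    v = np.zeros((n, d))      # second-moment estimates
    v_hat = np.zeros((n, d))  # running max of v (the AMSGrad correction)
    for _ in range(steps):
        x = W @ x             # gossip step: mix parameters with neighbors
        g = np.stack([grad_fns[i](x[i]) for i in range(n)])
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g ** 2
        v_hat = np.maximum(v_hat, v)
        x = x - lr * m / (np.sqrt(v_hat) + eps)  # local adaptive update
    return x

# Toy decentralized problem: node i holds f_i(x) = 0.5 * ||x - c_i||^2,
# so the global average objective is minimized at the mean of the c_i.
c = np.array([[1.0], [2.0], [3.0]])
grad_fns = [lambda x, ci=ci: x - ci for ci in c]
W = np.full((3, 3), 1.0 / 3.0)  # fully connected, uniform mixing weights
x_final = decentralized_amsgrad(grad_fns, W, d=1)
# All nodes should end up near the consensus optimum x = 2.0.
```

On this symmetric toy problem all nodes drift to a neighborhood of the consensus minimizer; the paper's contribution is the convergence analysis showing when such a decentralized counterpart inherits the convergence of the underlying adaptive method.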

Cite this Paper


BibTeX
@InProceedings{pmlr-v189-chen23b,
  title     = {On the Convergence of Decentralized Adaptive Gradient Methods},
  author    = {Chen, Xiangyi and Karimi, Belhal and Zhao, Weijie and Li, Ping},
  booktitle = {Proceedings of The 14th Asian Conference on Machine Learning},
  pages     = {217--232},
  year      = {2023},
  editor    = {Khan, Emtiyaz and Gonen, Mehmet},
  volume    = {189},
  series    = {Proceedings of Machine Learning Research},
  month     = {12--14 Dec},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v189/chen23b/chen23b.pdf},
  url       = {https://proceedings.mlr.press/v189/chen23b.html},
  abstract  = {Adaptive gradient methods including Adam, AdaGrad, and their variants have been very successful for training deep learning models, such as neural networks. Meanwhile, given the need for distributed computing, distributed optimization algorithms are rapidly becoming a focal point. With the growth of computing power and the need for using machine learning models on mobile devices, the communication cost of distributed training algorithms needs careful consideration. In this paper, we introduce novel convergent decentralized adaptive gradient methods and rigorously incorporate adaptive gradient methods into decentralized training procedures. Specifically, we propose a general algorithmic framework that can convert existing adaptive gradient methods to their decentralized counterparts. In addition, we thoroughly analyze the convergence behavior of the proposed algorithmic framework and show that if a given adaptive gradient method converges, under some specific conditions, then its decentralized counterpart is also convergent. We illustrate the benefit of our generic decentralized framework on prototype methods, AMSGrad and AdaGrad.}
}
Endnote
%0 Conference Paper
%T On the Convergence of Decentralized Adaptive Gradient Methods
%A Xiangyi Chen
%A Belhal Karimi
%A Weijie Zhao
%A Ping Li
%B Proceedings of The 14th Asian Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Emtiyaz Khan
%E Mehmet Gonen
%F pmlr-v189-chen23b
%I PMLR
%P 217--232
%U https://proceedings.mlr.press/v189/chen23b.html
%V 189
%X Adaptive gradient methods including Adam, AdaGrad, and their variants have been very successful for training deep learning models, such as neural networks. Meanwhile, given the need for distributed computing, distributed optimization algorithms are rapidly becoming a focal point. With the growth of computing power and the need for using machine learning models on mobile devices, the communication cost of distributed training algorithms needs careful consideration. In this paper, we introduce novel convergent decentralized adaptive gradient methods and rigorously incorporate adaptive gradient methods into decentralized training procedures. Specifically, we propose a general algorithmic framework that can convert existing adaptive gradient methods to their decentralized counterparts. In addition, we thoroughly analyze the convergence behavior of the proposed algorithmic framework and show that if a given adaptive gradient method converges, under some specific conditions, then its decentralized counterpart is also convergent. We illustrate the benefit of our generic decentralized framework on prototype methods, AMSGrad and AdaGrad.
APA
Chen, X., Karimi, B., Zhao, W. & Li, P. (2023). On the Convergence of Decentralized Adaptive Gradient Methods. Proceedings of The 14th Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 189:217-232. Available from https://proceedings.mlr.press/v189/chen23b.html.