Adaptive Random Walk Gradient Descent for Decentralized Optimization

Tao Sun, Dongsheng Li, Bao Wang
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:20790-20809, 2022.

Abstract

In this paper, we study adaptive step size random walk gradient descent with momentum for decentralized optimization, in which the training samples are not drawn independently of one another. We establish theoretical convergence rates for this method in both convex and nonconvex settings. In particular, we prove that adaptive random walk algorithms perform as well as the non-adaptive method for dependent data in general cases, but achieve acceleration when the stochastic gradients are “sparse”. Moreover, we study the zeroth-order version of adaptive random walk gradient descent and provide corresponding convergence results. All assumptions used in this paper are mild and general, making our results applicable to many machine learning problems.
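To make the mechanism described in the abstract concrete, below is a minimal Python sketch of single-token random walk gradient descent with momentum and an AMSGrad-style adaptive step size, together with a two-point zeroth-order gradient estimator of the kind the zeroth-order variant could substitute for the local gradient. This is an illustrative sketch, not the authors' exact algorithm: the function names, the 4-node ring topology, the quadratic local losses, and all hyperparameters are assumptions made for the example.

```python
import numpy as np

def adaptive_random_walk_gd(local_grads, transition, x0, steps=2000,
                            lr=0.1, beta1=0.9, beta2=0.999, eps=1e-8, seed=0):
    """Sketch: a single token performs a Markov-chain random walk over the nodes;
    the visited node supplies a stochastic gradient of its local objective, and the
    update uses momentum with an AMSGrad-style coordinate-wise adaptive step size."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    m = np.zeros_like(x)          # first-moment (momentum) estimate
    v = np.zeros_like(x)          # second-moment estimate
    v_hat = np.zeros_like(x)      # running max keeps the denominator non-decreasing
    node = 0                      # the walk starts at node 0
    for _ in range(steps):
        g = local_grads[node](x)                  # stochastic gradient at the current node
        m = beta1 * m + (1 - beta1) * g           # momentum
        v = beta2 * v + (1 - beta2) * g * g       # coordinate-wise second moment
        v_hat = np.maximum(v_hat, v)
        x = x - lr * m / (np.sqrt(v_hat) + eps)   # adaptive (sparsity-aware) step
        node = rng.choice(len(local_grads), p=transition[node])  # next node of the walk
    return x

def zeroth_order_grad(f, x, mu=1e-3, rng=None):
    """Two-point zeroth-order gradient estimate of f at x, for the setting where
    only function values (not gradients) are available at each node."""
    rng = rng or np.random.default_rng()
    u = rng.standard_normal(x.shape)
    return (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u

# Toy example: 4 nodes on a ring, each holding the quadratic loss 0.5*||x - c_i||^2.
centers = [np.array([1.0, 0.0]), np.array([0.0, 1.0]),
           np.array([-1.0, 0.0]), np.array([0.0, -1.0])]
local_grads = [lambda x, c=c: x - c for c in centers]
P = np.array([[0.0, 0.5, 0.0, 0.5],       # random-walk transition matrix on a 4-cycle
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.5, 0.0, 0.5, 0.0]])
x_out = adaptive_random_walk_gd(local_grads, P, x0=np.zeros(2))
print(x_out)   # hovers near the minimizer of the average loss, roughly [0, 0]
```

The coordinate-wise denominator sqrt(v_hat) is where the sparsity advantage claimed in the abstract comes from: coordinates that rarely receive gradient signal accumulate a small second moment and therefore take effectively larger steps, while frequently updated coordinates are damped.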

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-sun22b,
  title     = {Adaptive Random Walk Gradient Descent for Decentralized Optimization},
  author    = {Sun, Tao and Li, Dongsheng and Wang, Bao},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {20790--20809},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/sun22b/sun22b.pdf},
  url       = {https://proceedings.mlr.press/v162/sun22b.html},
  abstract  = {In this paper, we study the adaptive step size random walk gradient descent with momentum for decentralized optimization, in which the training samples are drawn dependently with each other. We establish theoretical convergence rates of the adaptive step size random walk gradient descent with momentum for both convex and nonconvex settings. In particular, we prove that adaptive random walk algorithms perform as well as the non-adaptive method for dependent data in general cases but achieve acceleration when the stochastic gradients are “sparse”. Moreover, we study the zeroth-order version of adaptive random walk gradient descent and provide corresponding convergence results. All assumptions used in this paper are mild and general, making our results applicable to many machine learning problems.}
}
EndNote
%0 Conference Paper
%T Adaptive Random Walk Gradient Descent for Decentralized Optimization
%A Tao Sun
%A Dongsheng Li
%A Bao Wang
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-sun22b
%I PMLR
%P 20790--20809
%U https://proceedings.mlr.press/v162/sun22b.html
%V 162
%X In this paper, we study the adaptive step size random walk gradient descent with momentum for decentralized optimization, in which the training samples are drawn dependently with each other. We establish theoretical convergence rates of the adaptive step size random walk gradient descent with momentum for both convex and nonconvex settings. In particular, we prove that adaptive random walk algorithms perform as well as the non-adaptive method for dependent data in general cases but achieve acceleration when the stochastic gradients are “sparse”. Moreover, we study the zeroth-order version of adaptive random walk gradient descent and provide corresponding convergence results. All assumptions used in this paper are mild and general, making our results applicable to many machine learning problems.
APA
Sun, T., Li, D., & Wang, B. (2022). Adaptive Random Walk Gradient Descent for Decentralized Optimization. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:20790-20809. Available from https://proceedings.mlr.press/v162/sun22b.html.

Related Material

Download PDF: https://proceedings.mlr.press/v162/sun22b/sun22b.pdf