Local SGD Optimizes Overparameterized Neural Networks in Polynomial Time

Yuyang Deng, Mohammad Mahdi Kamani, Mehrdad Mahdavi
Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:6840-6861, 2022.

Abstract

In this paper, we prove that Local (S)GD (or FedAvg) can optimize deep neural networks with the Rectified Linear Unit (ReLU) activation function in polynomial time. Despite the established convergence theory of Local SGD for optimizing general smooth functions in communication-efficient distributed optimization, its convergence on non-smooth ReLU networks still eludes full theoretical understanding. The key property used in many analyses of Local SGD on smooth functions is gradient Lipschitzness, which ensures that the gradients at the local models do not drift far from the gradient at the averaged model. However, this desirable property does not hold in networks with the non-smooth ReLU activation function. We show that, even though ReLU networks do not admit gradient Lipschitzness, the difference between the gradients at the local models and at the averaged model remains small under the dynamics of Local SGD. We validate our theoretical results via extensive experiments. This work is the first to show the convergence of Local SGD on non-smooth functions, and it sheds light on the optimization theory of federated training of deep neural networks.
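
For readers who want a concrete picture of the algorithm analyzed here, the following is a minimal Python sketch of Local SGD (FedAvg) on a one-hidden-layer ReLU network trained on synthetic data. It illustrates the general scheme studied in the paper (local gradient steps on each client followed by server-side model averaging); the network width, step size, number of local steps, and synthetic data below are illustrative assumptions, not the authors' experimental setup.

# Minimal sketch of Local SGD (FedAvg) on a one-hidden-layer ReLU network.
# Illustration only: widths, step size, and synthetic data are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data split across K clients.
K, n_per_client, d, m = 4, 64, 10, 256   # clients, samples per client, input dim, hidden width
eta, local_steps, rounds = 0.05, 5, 50   # step size, local steps H, communication rounds

X = [rng.standard_normal((n_per_client, d)) for _ in range(K)]
y = [np.sin(Xk @ rng.standard_normal(d)) for Xk in X]

# Shared initialization of the hidden layer; the output layer a is kept fixed,
# as is common in overparameterized (NTK-style) analyses.
W_global = rng.standard_normal((m, d)) / np.sqrt(d)
a = rng.choice([-1.0, 1.0], size=m) / np.sqrt(m)

def predict(W, Xk):
    # ReLU network f(x) = a^T relu(W x)
    return np.maximum(Xk @ W.T, 0.0) @ a

def grad(W, Xk, yk):
    pre = Xk @ W.T                          # (n, m) pre-activations
    err = np.maximum(pre, 0.0) @ a - yk     # residuals f(x_i) - y_i
    # d(mean squared loss)/dW, using the ReLU (sub)gradient 1{pre > 0}
    return ((err[:, None] * (pre > 0) * a).T @ Xk) / len(yk)

for r in range(rounds):
    local_Ws = []
    for k in range(K):                      # each client starts from the averaged model
        W = W_global.copy()
        for _ in range(local_steps):        # H full-batch local steps (Local GD);
            W -= eta * grad(W, X[k], y[k])  # mini-batches would give Local SGD
        local_Ws.append(W)
    W_global = np.mean(local_Ws, axis=0)    # server averages the local models

loss = np.mean([np.mean((predict(W_global, X[k]) - y[k]) ** 2) for k in range(K)])
print(f"final average squared loss: {loss:.4f}")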

Cite this Paper


BibTeX
@InProceedings{pmlr-v151-deng22a,
  title     = {Local SGD Optimizes Overparameterized Neural Networks in Polynomial Time},
  author    = {Deng, Yuyang and Kamani, Mohammad Mahdi and Mahdavi, Mehrdad},
  booktitle = {Proceedings of The 25th International Conference on Artificial Intelligence and Statistics},
  pages     = {6840--6861},
  year      = {2022},
  editor    = {Camps-Valls, Gustau and Ruiz, Francisco J. R. and Valera, Isabel},
  volume    = {151},
  series    = {Proceedings of Machine Learning Research},
  month     = {28--30 Mar},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v151/deng22a/deng22a.pdf},
  url       = {https://proceedings.mlr.press/v151/deng22a.html}
}
Endnote
%0 Conference Paper
%T Local SGD Optimizes Overparameterized Neural Networks in Polynomial Time
%A Yuyang Deng
%A Mohammad Mahdi Kamani
%A Mehrdad Mahdavi
%B Proceedings of The 25th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2022
%E Gustau Camps-Valls
%E Francisco J. R. Ruiz
%E Isabel Valera
%F pmlr-v151-deng22a
%I PMLR
%P 6840--6861
%U https://proceedings.mlr.press/v151/deng22a.html
%V 151
APA
Deng, Y., Kamani, M. M., & Mahdavi, M. (2022). Local SGD Optimizes Overparameterized Neural Networks in Polynomial Time. Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research, 151:6840-6861. Available from https://proceedings.mlr.press/v151/deng22a.html.
