Convergence Rates of a Momentum Algorithm with Bounded Adaptive Step Size for Nonconvex Optimization

Anas Barakat, Pascal Bianchi
Proceedings of The 12th Asian Conference on Machine Learning, PMLR 129:225-240, 2020.

Abstract

Although Adam is a very popular algorithm for optimizing the weights of neural networks, it has been recently shown that it can diverge even in simple convex optimization examples. Several variants of Adam have been proposed to circumvent this convergence issue. In this work, we study the Adam algorithm for smooth nonconvex optimization under a boundedness assumption on the adaptive learning rate. The bound on the adaptive step size depends on the Lipschitz constant of the gradient of the objective function and provides safe theoretical adaptive step sizes. Under this boundedness assumption, we show a novel first-order convergence rate result in both deterministic and stochastic contexts. Furthermore, we establish convergence rates of the function value sequence using the Kurdyka-Łojasiewicz property.
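The algorithm studied here is Adam with its per-coordinate adaptive step size clamped from above. The sketch below illustrates the idea only: the names (`adam_bounded`, `step_bound`) are hypothetical, and the constant cap stands in for the paper's theoretically derived bound, which depends on the Lipschitz constant of the gradient and is not computed here.

```python
import numpy as np

def adam_bounded(grad, x0, alpha=0.01, beta1=0.9, beta2=0.999,
                 eps=1e-8, step_bound=1.0, n_iters=5000):
    """Adam with the per-coordinate adaptive step size clamped above.

    `step_bound` is a hypothetical cap on alpha / (sqrt(v_hat) + eps);
    the paper derives a safe bound from the Lipschitz constant of the
    gradient, which this sketch does not attempt to compute.
    """
    x = np.asarray(x0, dtype=float)
    m = np.zeros_like(x)  # first-moment (momentum) estimate
    v = np.zeros_like(x)  # second-moment estimate
    for t in range(1, n_iters + 1):
        g = grad(x)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        m_hat = m / (1 - beta1 ** t)  # bias correction
        v_hat = v / (1 - beta2 ** t)
        # clamp the adaptive step size so it never exceeds step_bound
        step = np.minimum(alpha / (np.sqrt(v_hat) + eps), step_bound)
        x = x - step * m_hat
    return x

# toy usage: minimize f(x) = x^2, whose gradient is 2x
x_star = adam_bounded(lambda x: 2 * x, x0=[5.0])
```

On this toy quadratic the iterates settle near the minimizer at 0; the clamp is what keeps the effective step size bounded even when the second-moment estimate `v_hat` becomes very small.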

Cite this Paper


BibTeX
@InProceedings{pmlr-v129-barakat20a,
  title     = {Convergence Rates of a Momentum Algorithm with Bounded Adaptive Step Size for Nonconvex Optimization},
  author    = {Barakat, Anas and Bianchi, Pascal},
  booktitle = {Proceedings of The 12th Asian Conference on Machine Learning},
  pages     = {225--240},
  year      = {2020},
  editor    = {Pan, Sinno Jialin and Sugiyama, Masashi},
  volume    = {129},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--20 Nov},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v129/barakat20a/barakat20a.pdf},
  url       = {https://proceedings.mlr.press/v129/barakat20a.html},
  abstract  = {Although Adam is a very popular algorithm for optimizing the weights of neural networks, it has been recently shown that it can diverge even in simple convex optimization examples. Several variants of Adam have been proposed to circumvent this convergence issue. In this work, we study the Adam algorithm for smooth nonconvex optimization under a boundedness assumption on the adaptive learning rate. The bound on the adaptive step size depends on the Lipschitz constant of the gradient of the objective function and provides safe theoretical adaptive step sizes. Under this boundedness assumption, we show a novel first order convergence rate result in both deterministic and stochastic contexts. Furthermore, we establish convergence rates of the function value sequence using the Kurdyka-Lojasiewicz property.}
}
Endnote
%0 Conference Paper
%T Convergence Rates of a Momentum Algorithm with Bounded Adaptive Step Size for Nonconvex Optimization
%A Anas Barakat
%A Pascal Bianchi
%B Proceedings of The 12th Asian Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Sinno Jialin Pan
%E Masashi Sugiyama
%F pmlr-v129-barakat20a
%I PMLR
%P 225--240
%U https://proceedings.mlr.press/v129/barakat20a.html
%V 129
%X Although Adam is a very popular algorithm for optimizing the weights of neural networks, it has been recently shown that it can diverge even in simple convex optimization examples. Several variants of Adam have been proposed to circumvent this convergence issue. In this work, we study the Adam algorithm for smooth nonconvex optimization under a boundedness assumption on the adaptive learning rate. The bound on the adaptive step size depends on the Lipschitz constant of the gradient of the objective function and provides safe theoretical adaptive step sizes. Under this boundedness assumption, we show a novel first order convergence rate result in both deterministic and stochastic contexts. Furthermore, we establish convergence rates of the function value sequence using the Kurdyka-Lojasiewicz property.
APA
Barakat, A. & Bianchi, P. (2020). Convergence Rates of a Momentum Algorithm with Bounded Adaptive Step Size for Nonconvex Optimization. Proceedings of The 12th Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 129:225-240. Available from https://proceedings.mlr.press/v129/barakat20a.html.