AdaGrad Stepsizes: Sharp Convergence Over Nonconvex Landscapes

Rachel Ward, Xiaoxia Wu, Leon Bottou
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:6677-6686, 2019.

Abstract

Adaptive gradient methods such as AdaGrad and its variants update the stepsize in stochastic gradient descent on the fly according to the gradients received along the way; such methods have gained widespread use in large-scale optimization for their ability to converge robustly, without the need to fine-tune parameters such as the stepsize schedule. Yet, the theoretical guarantees to date for AdaGrad are for online and convex optimization. We bridge this gap by providing strong theoretical guarantees for the convergence of AdaGrad over smooth, nonconvex landscapes. We show that the norm version of AdaGrad (AdaGrad-Norm) converges to a stationary point at the $\mathcal{O}(\log(N)/\sqrt{N})$ rate in the stochastic setting, and at the optimal $\mathcal{O}(1/N)$ rate in the batch (non-stochastic) setting – in this sense, our convergence guarantees are “sharp”. In particular, both our theoretical results and extensive numerical experiments imply that AdaGrad-Norm is robust to the unknown Lipschitz constant and level of stochastic noise on the gradient.
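For concreteness, the AdaGrad-Norm update studied in the paper maintains a single scalar stepsize that shrinks as squared gradient norms accumulate. The following Python sketch is an illustrative reading of that update rather than the authors' code; the function name adagrad_norm, the default values of eta and b0, the step count, and the test function in the usage example are assumptions chosen only for the demonstration.

    import numpy as np

    def adagrad_norm(grad, x0, eta=1.0, b0=1e-2, n_steps=1000):
        """Minimal AdaGrad-Norm sketch (illustrative, not the paper's code):
        a single scalar stepsize eta / b_k that shrinks as squared
        (stochastic) gradient norms accumulate."""
        x = np.asarray(x0, dtype=float)
        b2 = b0 ** 2                          # running accumulator b_k^2, initialized at b_0^2 > 0
        for _ in range(n_steps):
            g = grad(x)                       # (stochastic) gradient at the current iterate
            b2 += np.dot(g, g)                # b_{k+1}^2 = b_k^2 + ||g_k||^2
            x = x - (eta / np.sqrt(b2)) * g   # x_{k+1} = x_k - (eta / b_{k+1}) g_k
        return x

    # Usage example (hypothetical test problem): a smooth nonconvex function
    # f(x) = sum_i x_i^2 / (1 + x_i^2), whose gradient is 2x / (1 + x^2)^2.
    if __name__ == "__main__":
        grad_f = lambda x: 2 * x / (1 + x ** 2) ** 2
        print(adagrad_norm(grad_f, x0=np.ones(5) * 3.0))

In this sketch neither eta nor b0 is tuned to the Lipschitz constant or the noise level; that robustness to the initial stepsize parameters is precisely what the paper's convergence rates quantify.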

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-ward19a,
  title     = {{A}da{G}rad Stepsizes: Sharp Convergence Over Nonconvex Landscapes},
  author    = {Ward, Rachel and Wu, Xiaoxia and Bottou, Leon},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {6677--6686},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/ward19a/ward19a.pdf},
  url       = {https://proceedings.mlr.press/v97/ward19a.html},
  abstract  = {Adaptive gradient methods such as AdaGrad and its variants update the stepsize in stochastic gradient descent on the fly according to the gradients received along the way; such methods have gained widespread use in large-scale optimization for their ability to converge robustly, without the need to fine-tune parameters such as the stepsize schedule. Yet, the theoretical guarantees to date for AdaGrad are for online and convex optimization. We bridge this gap by providing strong theoretical guarantees for the convergence of AdaGrad over smooth, nonconvex landscapes. We show that the norm version of AdaGrad (AdaGrad-Norm) converges to a stationary point at the $\mathcal{O}(\log(N)/\sqrt{N})$ rate in the stochastic setting, and at the optimal $\mathcal{O}(1/N)$ rate in the batch (non-stochastic) setting – in this sense, our convergence guarantees are “sharp”. In particular, both our theoretical results and extensive numerical experiments imply that AdaGrad-Norm is robust to the unknown Lipschitz constant and level of stochastic noise on the gradient.}
}
Endnote
%0 Conference Paper
%T AdaGrad Stepsizes: Sharp Convergence Over Nonconvex Landscapes
%A Rachel Ward
%A Xiaoxia Wu
%A Leon Bottou
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-ward19a
%I PMLR
%P 6677--6686
%U https://proceedings.mlr.press/v97/ward19a.html
%V 97
%X Adaptive gradient methods such as AdaGrad and its variants update the stepsize in stochastic gradient descent on the fly according to the gradients received along the way; such methods have gained widespread use in large-scale optimization for their ability to converge robustly, without the need to fine-tune parameters such as the stepsize schedule. Yet, the theoretical guarantees to date for AdaGrad are for online and convex optimization. We bridge this gap by providing strong theoretical guarantees for the convergence of AdaGrad over smooth, nonconvex landscapes. We show that the norm version of AdaGrad (AdaGrad-Norm) converges to a stationary point at the $\mathcal{O}(\log(N)/\sqrt{N})$ rate in the stochastic setting, and at the optimal $\mathcal{O}(1/N)$ rate in the batch (non-stochastic) setting – in this sense, our convergence guarantees are “sharp”. In particular, both our theoretical results and extensive numerical experiments imply that AdaGrad-Norm is robust to the unknown Lipschitz constant and level of stochastic noise on the gradient.
APA
Ward, R., Wu, X., & Bottou, L. (2019). AdaGrad Stepsizes: Sharp Convergence Over Nonconvex Landscapes. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:6677-6686. Available from https://proceedings.mlr.press/v97/ward19a.html.