The Complexity of Finding Stationary Points with Stochastic Gradient Descent

Yoel Drori, Ohad Shamir
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:2658-2667, 2020.

Abstract

We study the iteration complexity of stochastic gradient descent (SGD) for minimizing the gradient norm of smooth, possibly nonconvex functions. We provide several results, implying that the classical $\mathcal{O}(\epsilon^{-4})$ upper bound (for making the average gradient norm less than $\epsilon$) cannot be improved upon, unless a combination of additional assumptions is made. Notably, this holds even if we limit ourselves to convex quadratic functions. We also show that for nonconvex functions, the feasibility of minimizing gradients with SGD is surprisingly sensitive to the choice of optimality criteria.
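The abstract's optimality criterion, driving the (average or minimum) gradient norm below $\epsilon$, can be illustrated with a minimal sketch. The snippet below is not from the paper: it runs plain SGD on a one-dimensional convex quadratic $f(x) = x^2/2$ with additive gradient noise and tracks the smallest gradient norm observed, the quantity whose $\mathcal{O}(\epsilon^{-4})$ iteration complexity the paper analyzes. All names and constants here are illustrative assumptions.

```python
import random

def noisy_grad(x, sigma=1.0):
    # True gradient of f(x) = x^2 / 2 is x; add zero-mean Gaussian noise
    # to model the stochastic gradient oracle.
    return x + random.gauss(0.0, sigma)

def sgd_min_grad_norm(x0=5.0, steps=10_000, eta=0.01, seed=0):
    # Run SGD with constant step size eta, returning the minimum
    # gradient norm |f'(x)| = |x| seen along the trajectory.
    random.seed(seed)
    x = x0
    best = abs(x)
    for _ in range(steps):
        x -= eta * noisy_grad(x)
        best = min(best, abs(x))
    return best

print(sgd_min_grad_norm())
```

With these (hypothetical) settings the minimum gradient norm shrinks well below the noise level, but the paper's lower bounds show that, without extra assumptions, the number of iterations needed to guarantee an $\epsilon$-small gradient norm cannot beat $\mathcal{O}(\epsilon^{-4})$, even on convex quadratics.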

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-drori20a,
  title     = {The Complexity of Finding Stationary Points with Stochastic Gradient Descent},
  author    = {Drori, Yoel and Shamir, Ohad},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {2658--2667},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/drori20a/drori20a.pdf},
  url       = {http://proceedings.mlr.press/v119/drori20a.html},
  abstract  = {We study the iteration complexity of stochastic gradient descent (SGD) for minimizing the gradient norm of smooth, possibly nonconvex functions. We provide several results, implying that the classical $\mathcal{O}(\epsilon^{-4})$ upper bound (for making the average gradient norm less than $\epsilon$) cannot be improved upon, unless a combination of additional assumptions is made. Notably, this holds even if we limit ourselves to convex quadratic functions. We also show that for nonconvex functions, the feasibility of minimizing gradients with SGD is surprisingly sensitive to the choice of optimality criteria.}
}
Endnote
%0 Conference Paper
%T The Complexity of Finding Stationary Points with Stochastic Gradient Descent
%A Yoel Drori
%A Ohad Shamir
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-drori20a
%I PMLR
%P 2658--2667
%U http://proceedings.mlr.press/v119/drori20a.html
%V 119
%X We study the iteration complexity of stochastic gradient descent (SGD) for minimizing the gradient norm of smooth, possibly nonconvex functions. We provide several results, implying that the classical $\mathcal{O}(\epsilon^{-4})$ upper bound (for making the average gradient norm less than $\epsilon$) cannot be improved upon, unless a combination of additional assumptions is made. Notably, this holds even if we limit ourselves to convex quadratic functions. We also show that for nonconvex functions, the feasibility of minimizing gradients with SGD is surprisingly sensitive to the choice of optimality criteria.
APA
Drori, Y. & Shamir, O. (2020). The Complexity of Finding Stationary Points with Stochastic Gradient Descent. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:2658-2667. Available from http://proceedings.mlr.press/v119/drori20a.html.