The Complexity of Making the Gradient Small in Stochastic Convex Optimization

Dylan J. Foster, Ayush Sekhari, Ohad Shamir, Nathan Srebro, Karthik Sridharan, Blake Woodworth
Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:1319-1345, 2019.

Abstract

We give nearly matching upper and lower bounds on the oracle complexity of finding $\epsilon$-stationary points ($\|\nabla F(x)\|\leq\epsilon$) in stochastic convex optimization. We jointly analyze the oracle complexity in both the local stochastic oracle model and the global oracle (or, statistical learning) model. This allows us to decompose the complexity of finding near-stationary points into optimization complexity and sample complexity, and reveals some surprising differences between the complexity of stochastic optimization versus learning. Notably, we show that in the global oracle/statistical learning model, only logarithmic dependence on smoothness is required to find a near-stationary point, whereas polynomial dependence on smoothness is necessary in the local stochastic oracle model. In other words, the separation in complexity between the two models can be exponential, and the folklore understanding that smoothness is required to find stationary points is only weakly true for statistical learning. Our upper bounds are based on extensions of a recent “recursive regularization” technique proposed by Allen-Zhu (2018). We show how to extend the technique to achieve near-optimal rates, and in particular show how to leverage the extra information available in the global oracle model. Our algorithm for the global model can be implemented efficiently through finite sum methods, and suggests an interesting new computational-statistical tradeoff.
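
For orientation, the stationarity target in the abstract can be stated as a one-line condition; this only restates the abstract's notation, and the expectation form of $F$ below is the standard stochastic-optimization setup rather than a definition quoted from the paper:

\[
F(x) \;=\; \mathbb{E}_{z \sim \mathcal{D}}\big[f(x;z)\big], \qquad \text{find } x \text{ such that } \|\nabla F(x)\| \le \epsilon .
\]

Roughly speaking, a local stochastic oracle reveals the value and gradient of $f(\cdot\,;z)$ only at the single point queried for each fresh sample $z$, whereas the global (statistical learning) oracle reveals the sampled function itself; the abstract's separation says the latter model needs only logarithmic, rather than polynomial, dependence on the smoothness constant.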

Cite this Paper


BibTeX
@InProceedings{pmlr-v99-foster19b,
  title     = {The Complexity of Making the Gradient Small in Stochastic Convex Optimization},
  author    = {Foster, Dylan J. and Sekhari, Ayush and Shamir, Ohad and Srebro, Nathan and Sridharan, Karthik and Woodworth, Blake},
  booktitle = {Proceedings of the Thirty-Second Conference on Learning Theory},
  pages     = {1319--1345},
  year      = {2019},
  editor    = {Beygelzimer, Alina and Hsu, Daniel},
  volume    = {99},
  series    = {Proceedings of Machine Learning Research},
  month     = {25--28 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v99/foster19b/foster19b.pdf},
  url       = {https://proceedings.mlr.press/v99/foster19b.html},
  abstract  = {We give nearly matching upper and lower bounds on the oracle complexity of finding $\epsilon$-stationary points ($\|\nabla F(x)\|\leq\epsilon$) in stochastic convex optimization. We jointly analyze the oracle complexity in both the local stochastic oracle model and the global oracle (or, statistical learning) model. This allows us to decompose the complexity of finding near-stationary points into optimization complexity and sample complexity, and reveals some surprising differences between the complexity of stochastic optimization versus learning. Notably, we show that in the global oracle/statistical learning model, only logarithmic dependence on smoothness is required to find a near-stationary point, whereas polynomial dependence on smoothness is necessary in the local stochastic oracle model. In other words, the separation in complexity between the two models can be exponential, and the folklore understanding that smoothness is required to find stationary points is only weakly true for statistical learning. Our upper bounds are based on extensions of a recent “recursive regularization” technique proposed by Allen-Zhu (2018). We show how to extend the technique to achieve near-optimal rates, and in particular show how to leverage the extra information available in the global oracle model. Our algorithm for the global model can be implemented efficiently through finite sum methods, and suggests an interesting new computational-statistical tradeoff.}
}
Endnote
%0 Conference Paper
%T The Complexity of Making the Gradient Small in Stochastic Convex Optimization
%A Dylan J. Foster
%A Ayush Sekhari
%A Ohad Shamir
%A Nathan Srebro
%A Karthik Sridharan
%A Blake Woodworth
%B Proceedings of the Thirty-Second Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2019
%E Alina Beygelzimer
%E Daniel Hsu
%F pmlr-v99-foster19b
%I PMLR
%P 1319--1345
%U https://proceedings.mlr.press/v99/foster19b.html
%V 99
%X We give nearly matching upper and lower bounds on the oracle complexity of finding $\epsilon$-stationary points ($\|\nabla F(x)\|\leq\epsilon$) in stochastic convex optimization. We jointly analyze the oracle complexity in both the local stochastic oracle model and the global oracle (or, statistical learning) model. This allows us to decompose the complexity of finding near-stationary points into optimization complexity and sample complexity, and reveals some surprising differences between the complexity of stochastic optimization versus learning. Notably, we show that in the global oracle/statistical learning model, only logarithmic dependence on smoothness is required to find a near-stationary point, whereas polynomial dependence on smoothness is necessary in the local stochastic oracle model. In other words, the separation in complexity between the two models can be exponential, and the folklore understanding that smoothness is required to find stationary points is only weakly true for statistical learning. Our upper bounds are based on extensions of a recent “recursive regularization” technique proposed by Allen-Zhu (2018). We show how to extend the technique to achieve near-optimal rates, and in particular show how to leverage the extra information available in the global oracle model. Our algorithm for the global model can be implemented efficiently through finite sum methods, and suggests an interesting new computational-statistical tradeoff.
APA
Foster, D.J., Sekhari, A., Shamir, O., Srebro, N., Sridharan, K. & Woodworth, B. (2019). The Complexity of Making the Gradient Small in Stochastic Convex Optimization. Proceedings of the Thirty-Second Conference on Learning Theory, in Proceedings of Machine Learning Research 99:1319-1345. Available from https://proceedings.mlr.press/v99/foster19b.html.