The sample complexity of ERMs in stochastic convex optimization

Daniel Carmon, Amir Yehudayoff, Roi Livni
Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, PMLR 238:3799-3807, 2024.

Abstract

Stochastic convex optimization is one of the most well-studied models for learning in modern machine learning. Nevertheless, a central fundamental question in this setup remained unresolved: how many data points must be observed so that any empirical risk minimizer (ERM) shows good performance on the true population? This question was proposed by Feldman, who proved that $\Omega(\frac{d}{\epsilon} + \frac{1}{\epsilon^2})$ data points are necessary (where $d$ is the dimension and $\epsilon > 0$ the accuracy parameter). Proving an $\omega(\frac{d}{\epsilon} + \frac{1}{\epsilon^2})$ lower bound was left as an open problem. In this work we show that in fact $\tilde{O}(\frac{d}{\epsilon} + \frac{1}{\epsilon^2})$ data points are also sufficient. This settles the question and yields a new separation between ERMs and uniform convergence. This sample complexity holds for the classical setup of learning bounded convex Lipschitz functions over the Euclidean unit ball. We further generalize the result and show that a similar upper bound holds for all symmetric convex bodies. The general bound is composed of two terms: (i) a term of the form $\tilde{O}(\frac{d}{\epsilon})$ with an inverse-linear dependence on the accuracy parameter, and (ii) a term that depends on the statistical complexity of the class of linear functions (captured by the Rademacher complexity). The proof builds a mechanism for controlling the behavior of stochastic convex optimization problems.
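To make the setup concrete: an ERM picks any minimizer of the empirical risk over the sample, and the question is how large the sample must be before its population risk is also small. The sketch below is a minimal illustration of this setting, not the paper's construction: it runs projected subgradient descent on the empirical risk of the convex 1-Lipschitz loss $f(w; z) = \|w - z\|$ over the Euclidean unit ball, with data drawn (as an assumption for the demo) uniformly from the unit sphere, and then compares empirical and population risk.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 5, 2000  # dimension and sample size (illustrative values)

def loss(w, Z):
    """Empirical risk of f(w; z) = ||w - z||, a convex 1-Lipschitz loss."""
    return np.linalg.norm(w - Z, axis=1).mean()

def sample(m):
    """Draw m points uniformly from the unit sphere (the 'unknown' distribution)."""
    Z = rng.normal(size=(m, d))
    return Z / np.linalg.norm(Z, axis=1, keepdims=True)

Z_train = sample(n)

# Projected subgradient descent on the empirical risk: one concrete ERM
# over the Euclidean unit ball.
w = np.zeros(d)
for t in range(1, 501):
    diffs = w - Z_train
    norms = np.linalg.norm(diffs, axis=1, keepdims=True)
    g = (diffs / np.maximum(norms, 1e-12)).mean(axis=0)  # subgradient of empirical risk
    w -= g / np.sqrt(t)                                  # decaying step size
    nw = np.linalg.norm(w)
    if nw > 1.0:
        w /= nw  # project back onto the unit ball

empirical = loss(w, Z_train)                  # risk on the sample
population = loss(w, sample(200_000))         # Monte-Carlo estimate of true risk
gap = abs(population - empirical)             # the generalization gap of this ERM
```

For this well-behaved distribution the gap is small; the paper's point is about how large `n` must be so that *every* ERM, including adversarially chosen ones, has a small gap.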

Cite this Paper


BibTeX
@InProceedings{pmlr-v238-carmon24a,
  title     = {The sample complexity of {ERM}s in stochastic convex optimization},
  author    = {Carmon, Daniel and Yehudayoff, Amir and Livni, Roi},
  booktitle = {Proceedings of The 27th International Conference on Artificial Intelligence and Statistics},
  pages     = {3799--3807},
  year      = {2024},
  editor    = {Dasgupta, Sanjoy and Mandt, Stephan and Li, Yingzhen},
  volume    = {238},
  series    = {Proceedings of Machine Learning Research},
  month     = {02--04 May},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v238/carmon24a/carmon24a.pdf},
  url       = {https://proceedings.mlr.press/v238/carmon24a.html},
  abstract  = {Stochastic convex optimization is one of the most well-studied models for learning in modern machine learning. Nevertheless, a central fundamental question in this setup remained unresolved: how many data points must be observed so that any empirical risk minimizer (ERM) shows good performance on the true population? This question was proposed by Feldman, who proved that $\Omega(\frac{d}{\epsilon} + \frac{1}{\epsilon^2})$ data points are necessary (where $d$ is the dimension and $\epsilon > 0$ the accuracy parameter). Proving an $\omega(\frac{d}{\epsilon} + \frac{1}{\epsilon^2})$ lower bound was left as an open problem. In this work we show that in fact $\tilde{O}(\frac{d}{\epsilon} + \frac{1}{\epsilon^2})$ data points are also sufficient. This settles the question and yields a new separation between ERMs and uniform convergence. This sample complexity holds for the classical setup of learning bounded convex Lipschitz functions over the Euclidean unit ball. We further generalize the result and show that a similar upper bound holds for all symmetric convex bodies. The general bound is composed of two terms: (i) a term of the form $\tilde{O}(\frac{d}{\epsilon})$ with an inverse-linear dependence on the accuracy parameter, and (ii) a term that depends on the statistical complexity of the class of linear functions (captured by the Rademacher complexity). The proof builds a mechanism for controlling the behavior of stochastic convex optimization problems.}
}
Endnote
%0 Conference Paper
%T The sample complexity of ERMs in stochastic convex optimization
%A Daniel Carmon
%A Amir Yehudayoff
%A Roi Livni
%B Proceedings of The 27th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2024
%E Sanjoy Dasgupta
%E Stephan Mandt
%E Yingzhen Li
%F pmlr-v238-carmon24a
%I PMLR
%P 3799--3807
%U https://proceedings.mlr.press/v238/carmon24a.html
%V 238
%X Stochastic convex optimization is one of the most well-studied models for learning in modern machine learning. Nevertheless, a central fundamental question in this setup remained unresolved: how many data points must be observed so that any empirical risk minimizer (ERM) shows good performance on the true population? This question was proposed by Feldman, who proved that $\Omega(\frac{d}{\epsilon} + \frac{1}{\epsilon^2})$ data points are necessary (where $d$ is the dimension and $\epsilon > 0$ the accuracy parameter). Proving an $\omega(\frac{d}{\epsilon} + \frac{1}{\epsilon^2})$ lower bound was left as an open problem. In this work we show that in fact $\tilde{O}(\frac{d}{\epsilon} + \frac{1}{\epsilon^2})$ data points are also sufficient. This settles the question and yields a new separation between ERMs and uniform convergence. This sample complexity holds for the classical setup of learning bounded convex Lipschitz functions over the Euclidean unit ball. We further generalize the result and show that a similar upper bound holds for all symmetric convex bodies. The general bound is composed of two terms: (i) a term of the form $\tilde{O}(\frac{d}{\epsilon})$ with an inverse-linear dependence on the accuracy parameter, and (ii) a term that depends on the statistical complexity of the class of linear functions (captured by the Rademacher complexity). The proof builds a mechanism for controlling the behavior of stochastic convex optimization problems.
APA
Carmon, D., Yehudayoff, A. & Livni, R. (2024). The sample complexity of ERMs in stochastic convex optimization. Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 238:3799-3807. Available from https://proceedings.mlr.press/v238/carmon24a.html.