The Dimension Strikes Back with Gradients: Generalization of Gradient Methods in Stochastic Convex Optimization

Matan Schliserman, Uri Sherman, Tomer Koren
Proceedings of The 36th International Conference on Algorithmic Learning Theory, PMLR 272:1041-1107, 2025.

Abstract

We study the generalization performance of gradient methods in the fundamental stochastic convex optimization setting, focusing on its dimension dependence. First, for full-batch gradient descent (GD) we give a construction of a learning problem in dimension d = O(n²), where the canonical version of GD (tuned for optimal performance on the empirical risk) trained with n training examples converges, with constant probability, to an approximate empirical risk minimizer with Ω(1) population excess risk. Our bound translates to a lower bound of Ω(√d) on the number of training examples required for standard GD to reach a non-trivial test error, answering an open question raised by Feldman (2016) and Amir, Koren and Livni (2021) and showing that a non-trivial dimension dependence is unavoidable. Furthermore, for standard one-pass stochastic gradient descent (SGD), we show that an application of the same construction technique provides a similar Ω(√d) lower bound for the sample complexity of SGD to reach a non-trivial empirical error, despite achieving optimal test performance. This again provides for an exponential improvement in the dimension dependence compared to previous work (Koren et al., 2022), resolving an open question left therein.
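The two algorithms compared in the abstract can be sketched as follows. This is only an illustrative contrast on a simple least-squares objective, not the paper's hard-instance construction; the objective, step sizes, and iteration counts here are assumptions chosen for clarity.

```python
import numpy as np

def full_batch_gd(X, y, eta=0.1, steps=200):
    """Full-batch GD: every step uses the gradient of the empirical risk
    averaged over all n training examples (multiple passes over the data)."""
    w = np.zeros(X.shape[1])
    n = len(y)
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / n  # gradient of the empirical risk
        w -= eta * grad
    return w

def one_pass_sgd(X, y, eta=0.01):
    """One-pass SGD: each training example is used exactly once, so the
    iterate is updated with a single stochastic gradient per sample."""
    w = np.zeros(X.shape[1])
    for x_i, y_i in zip(X, y):
        w -= eta * (x_i @ w - y_i) * x_i  # stochastic gradient at (x_i, y_i)
    return w
```

The paper's lower bounds concern exactly this contrast: full-batch GD can minimize the empirical risk yet generalize poorly unless n = Ω(√d), while one-pass SGD can achieve optimal test error yet leave the empirical risk large.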

Cite this Paper


BibTeX
@InProceedings{pmlr-v272-schliserman25a,
  title = {The Dimension Strikes Back with Gradients: Generalization of Gradient Methods in Stochastic Convex Optimization},
  author = {Schliserman, Matan and Sherman, Uri and Koren, Tomer},
  booktitle = {Proceedings of The 36th International Conference on Algorithmic Learning Theory},
  pages = {1041--1107},
  year = {2025},
  editor = {Kamath, Gautam and Loh, Po-Ling},
  volume = {272},
  series = {Proceedings of Machine Learning Research},
  month = {24--27 Feb},
  publisher = {PMLR},
  pdf = {https://raw.githubusercontent.com/mlresearch/v272/main/assets/schliserman25a/schliserman25a.pdf},
  url = {https://proceedings.mlr.press/v272/schliserman25a.html},
  abstract = {We study the generalization performance of gradient methods in the fundamental stochastic convex optimization setting, focusing on its dimension dependence. First, for full-batch gradient descent (GD) we give a construction of a learning problem in dimension $d=O(n^2)$, where the canonical version of GD (tuned for optimal performance on the empirical risk) trained with $n$ training examples converges, with constant probability, to an approximate empirical risk minimizer with $\Omega(1)$ population excess risk. Our bound translates to a lower bound of $\smash{\Omega(\sqrt{d})}$ on the number of training examples required for standard GD to reach a non-trivial test error, answering an open question raised by Feldman (2016) and Amir, Koren and Livni (2021) and showing that a non-trivial dimension dependence is unavoidable. Furthermore, for standard one-pass stochastic gradient descent (SGD), we show that an application of the same construction technique provides a similar $\smash{\Omega(\sqrt{d})}$ lower bound for the sample complexity of SGD to reach a non-trivial empirical error, despite achieving optimal test performance. This again provides for an exponential improvement in the dimension dependence compared to previous work (Koren et al., 2022), resolving an open question left therein.}
}
Endnote
%0 Conference Paper
%T The Dimension Strikes Back with Gradients: Generalization of Gradient Methods in Stochastic Convex Optimization
%A Matan Schliserman
%A Uri Sherman
%A Tomer Koren
%B Proceedings of The 36th International Conference on Algorithmic Learning Theory
%C Proceedings of Machine Learning Research
%D 2025
%E Gautam Kamath
%E Po-Ling Loh
%F pmlr-v272-schliserman25a
%I PMLR
%P 1041--1107
%U https://proceedings.mlr.press/v272/schliserman25a.html
%V 272
%X We study the generalization performance of gradient methods in the fundamental stochastic convex optimization setting, focusing on its dimension dependence. First, for full-batch gradient descent (GD) we give a construction of a learning problem in dimension $d=O(n^2)$, where the canonical version of GD (tuned for optimal performance on the empirical risk) trained with $n$ training examples converges, with constant probability, to an approximate empirical risk minimizer with $\Omega(1)$ population excess risk. Our bound translates to a lower bound of $\smash{\Omega(\sqrt{d})}$ on the number of training examples required for standard GD to reach a non-trivial test error, answering an open question raised by Feldman (2016) and Amir, Koren and Livni (2021) and showing that a non-trivial dimension dependence is unavoidable. Furthermore, for standard one-pass stochastic gradient descent (SGD), we show that an application of the same construction technique provides a similar $\smash{\Omega(\sqrt{d})}$ lower bound for the sample complexity of SGD to reach a non-trivial empirical error, despite achieving optimal test performance. This again provides for an exponential improvement in the dimension dependence compared to previous work (Koren et al., 2022), resolving an open question left therein.
APA
Schliserman, M., Sherman, U. & Koren, T. (2025). The Dimension Strikes Back with Gradients: Generalization of Gradient Methods in Stochastic Convex Optimization. Proceedings of The 36th International Conference on Algorithmic Learning Theory, in Proceedings of Machine Learning Research 272:1041-1107. Available from https://proceedings.mlr.press/v272/schliserman25a.html.

Related Material