SGD Generalizes Better Than GD (And Regularization Doesn’t Help)

Idan Amir, Tomer Koren, Roi Livni
Proceedings of Thirty Fourth Conference on Learning Theory, PMLR 134:63-92, 2021.

Abstract

We give a new separation result between the generalization performance of stochastic gradient descent (SGD) and of full-batch gradient descent (GD) in the fundamental stochastic convex optimization model. While for SGD it is well-known that $O(1/\epsilon^2)$ iterations suffice for obtaining a solution with $\epsilon$ excess expected risk, we show that with the same number of steps GD may overfit and emit a solution with $\Omega(1)$ generalization error. Moreover, we show that in fact $\Omega(1/\epsilon^4)$ iterations are necessary for GD to match the generalization performance of SGD, which is also tight due to recent work by Bassily et al. (2020). We further discuss how regularizing the empirical risk minimized by GD essentially does not change the above result, and revisit the concepts of stability, implicit bias and the role of the learning algorithm in generalization.
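To make the comparison concrete, here is a minimal sketch (in Python with NumPy) of the two algorithms contrasted in the abstract: one-pass SGD, which takes a gradient step on a fresh sample at every iteration, versus full-batch GD, which repeatedly descends on the empirical risk of a single fixed sample. The quadratic instance, step size, dimension, and sample sizes below are illustrative assumptions only; in particular, this smooth toy problem does not reproduce the Ω(1) overfitting gap, which the paper establishes via a carefully constructed nonsmooth convex instance.

import numpy as np

# Minimal, hypothetical sketch -- NOT the paper's lower-bound construction.
rng = np.random.default_rng(0)
d, n = 20, 100                       # dimension and sample size (assumed)
w_star = np.ones(d) / np.sqrt(d)     # population minimizer (assumed)

def sample(m):
    """Draw m data points z ~ w_star + Gaussian noise (illustrative distribution)."""
    return w_star + rng.normal(scale=1.0, size=(m, d))

def grad(w, z):
    """Gradient of the convex instance loss f(w; z) = 0.5 * ||w - z||^2."""
    return w - z

def sgd(T, eta):
    """One-pass SGD: T steps, each using a fresh i.i.d. sample; return averaged iterate."""
    w = np.zeros(d)
    iterates = [w]
    for z in sample(T):
        w = w - eta * grad(w, z)
        iterates.append(w)
    return np.mean(iterates, axis=0)

def gd(T, eta, S):
    """Full-batch GD: T steps on the empirical risk of the fixed sample S; return averaged iterate."""
    w = np.zeros(d)
    iterates = [w]
    for _ in range(T):
        w = w - eta * np.mean([grad(w, z) for z in S], axis=0)
        iterates.append(w)
    return np.mean(iterates, axis=0)

def excess_risk(w):
    """Population excess risk of w for the quadratic instance above."""
    return 0.5 * np.sum((w - w_star) ** 2)

S = sample(n)
T = n                                # same number of steps for both algorithms
print("SGD excess risk:", excess_risk(sgd(T, eta=0.1)))
print("GD  excess risk:", excess_risk(gd(T, eta=0.1, S=S)))

In the language of the abstract, SGD needs O(1/ε²) such steps to reach ε excess expected risk, whereas the paper shows GD may need Ω(1/ε⁴) steps to match that guarantee, even though both updates look equally simple in code.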

Cite this Paper


BibTeX
@InProceedings{pmlr-v134-amir21a,
  title     = {SGD Generalizes Better Than GD (And Regularization Doesn’t Help)},
  author    = {Amir, Idan and Koren, Tomer and Livni, Roi},
  booktitle = {Proceedings of Thirty Fourth Conference on Learning Theory},
  pages     = {63--92},
  year      = {2021},
  editor    = {Belkin, Mikhail and Kpotufe, Samory},
  volume    = {134},
  series    = {Proceedings of Machine Learning Research},
  month     = {15--19 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v134/amir21a/amir21a.pdf},
  url       = {https://proceedings.mlr.press/v134/amir21a.html}
}
APA
Amir, I., Koren, T. & Livni, R.. (2021). SGD Generalizes Better Than GD (And Regularization Doesn’t Help). Proceedings of Thirty Fourth Conference on Learning Theory, in Proceedings of Machine Learning Research 134:63-92 Available from https://proceedings.mlr.press/v134/amir21a.html.
