Information-Theoretic Generalization Bounds for Stochastic Gradient Descent

Gergely Neu, Gintare Karolina Dziugaite, Mahdi Haghifam, Daniel M. Roy
Proceedings of Thirty Fourth Conference on Learning Theory, PMLR 134:3526-3545, 2021.

Abstract

We study the generalization properties of the popular stochastic optimization method known as stochastic gradient descent (SGD) for optimizing general non-convex loss functions. Our main contribution is providing upper bounds on the generalization error that depend on local statistics of the stochastic gradients evaluated along the path of iterates calculated by SGD. The key factors our bounds depend on are the variance of the gradients (with respect to the data distribution), the local smoothness of the objective function along the SGD path, and the sensitivity of the loss function to perturbations of the final output. Our key technical tool is a combination of the information-theoretic generalization bounds previously used for analyzing randomized variants of SGD with a perturbation analysis of the iterates.
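
The abstract only names the ingredients of the argument, so below is a minimal sketch, in generic notation, of the two pieces it refers to: the SGD recursion with a perturbed (noisy) output, and the standard mutual-information generalization bound of Xu and Raginsky (2017) on which such analyses build. The notation (W_t, eta_t, xi, R, n) and the Gaussian form of the perturbation are illustrative assumptions made here; the displayed bound is the classical one, not the paper's refined bound in terms of gradient variance and local smoothness along the SGD path.

\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}

% SGD recursion on the empirical loss built from the sample S = (Z_1, ..., Z_n):
% W_t are the iterates, eta_t the step sizes, g_t a stochastic gradient at W_t.
\[
  W_{t+1} \;=\; W_t - \eta_t \, g_t(W_t), \qquad t = 0, 1, \dots, T-1 .
\]

% Perturbation analysis: generalization is studied for a noisy surrogate of the
% final iterate, e.g. a Gaussian perturbation of W_T (illustrative choice).
\[
  \widetilde{W} \;=\; W_T + \xi, \qquad \xi \sim \mathcal{N}(0, \sigma^2 I).
\]

% Standard information-theoretic generalization bound (Xu and Raginsky, 2017):
% if the loss \ell(w, Z) is R-subgaussian under the data distribution, then for
% any algorithm with output \widetilde{W} trained on an i.i.d. sample S of size n,
\[
  \bigl| \mathbb{E}\bigl[ L_\mu(\widetilde{W}) - L_S(\widetilde{W}) \bigr] \bigr|
  \;\le\;
  \sqrt{\frac{2 R^2 \, I(\widetilde{W}; S)}{n}} ,
\]
% where L_\mu is the population risk, L_S the empirical risk, and I(\widetilde{W}; S)
% the mutual information between the perturbed output and the training sample.
% The paper's contribution lies in bounding I(\widetilde{W}; S) for SGD in terms of
% local statistics (gradient variance, local smoothness) along the iterate path.

\end{document}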

Cite this Paper


BibTeX
@InProceedings{pmlr-v134-neu21a,
  title     = {Information-Theoretic Generalization Bounds for Stochastic Gradient Descent},
  author    = {Neu, Gergely and Dziugaite, Gintare Karolina and Haghifam, Mahdi and Roy, Daniel M.},
  booktitle = {Proceedings of Thirty Fourth Conference on Learning Theory},
  pages     = {3526--3545},
  year      = {2021},
  editor    = {Belkin, Mikhail and Kpotufe, Samory},
  volume    = {134},
  series    = {Proceedings of Machine Learning Research},
  month     = {15--19 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v134/neu21a/neu21a.pdf},
  url       = {https://proceedings.mlr.press/v134/neu21a.html},
  abstract  = {We study the generalization properties of the popular stochastic optimization method known as stochastic gradient descent (SGD) for optimizing general non-convex loss functions. Our main contribution is providing upper bounds on the generalization error that depend on local statistics of the stochastic gradients evaluated along the path of iterates calculated by SGD. The key factors our bounds depend on are the variance of the gradients (with respect to the data distribution) and the local smoothness of the objective function along the SGD path, and the sensitivity of the loss function to perturbations to the final output. Our key technical tool is combining the information-theoretic generalization bounds previously used for analyzing randomized variants of SGD with a perturbation analysis of the iterates.}
}
Endnote
%0 Conference Paper
%T Information-Theoretic Generalization Bounds for Stochastic Gradient Descent
%A Gergely Neu
%A Gintare Karolina Dziugaite
%A Mahdi Haghifam
%A Daniel M. Roy
%B Proceedings of Thirty Fourth Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2021
%E Mikhail Belkin
%E Samory Kpotufe
%F pmlr-v134-neu21a
%I PMLR
%P 3526--3545
%U https://proceedings.mlr.press/v134/neu21a.html
%V 134
%X We study the generalization properties of the popular stochastic optimization method known as stochastic gradient descent (SGD) for optimizing general non-convex loss functions. Our main contribution is providing upper bounds on the generalization error that depend on local statistics of the stochastic gradients evaluated along the path of iterates calculated by SGD. The key factors our bounds depend on are the variance of the gradients (with respect to the data distribution) and the local smoothness of the objective function along the SGD path, and the sensitivity of the loss function to perturbations to the final output. Our key technical tool is combining the information-theoretic generalization bounds previously used for analyzing randomized variants of SGD with a perturbation analysis of the iterates.
APA
Neu, G., Dziugaite, G.K., Haghifam, M., & Roy, D.M. (2021). Information-Theoretic Generalization Bounds for Stochastic Gradient Descent. Proceedings of Thirty Fourth Conference on Learning Theory, in Proceedings of Machine Learning Research 134:3526-3545. Available from https://proceedings.mlr.press/v134/neu21a.html.
