Train faster, generalize better: Stability of stochastic gradient descent

Moritz Hardt, Ben Recht, Yoram Singer
Proceedings of The 33rd International Conference on Machine Learning, PMLR 48:1225-1234, 2016.

Abstract

We show that parametric models trained by a stochastic gradient method (SGM) with few iterations have vanishing generalization error. We prove our results by arguing that SGM is algorithmically stable in the sense of Bousquet and Elisseeff. Our analysis only employs elementary tools from convex and continuous optimization. We derive stability bounds for both convex and non-convex optimization under standard Lipschitz and smoothness assumptions. Applying our results to the convex case, we provide new insights for why multiple epochs of stochastic gradient methods generalize well in practice. In the non-convex case, we give a new interpretation of common practices in neural networks, and formally show that popular techniques for training large deep models are indeed stability-promoting. Our findings conceptually underscore the importance of reducing training time beyond its obvious benefit.
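For reference, a minimal LaTeX sketch of the notions the abstract invokes; the notation ($f$, $w_t$, $\alpha_t$, $n$, $\varepsilon_{\mathrm{stab}}$) is ours and not fixed by this page. An algorithm $A$ is uniformly stable in the sense of Bousquet and Elisseeff if replacing a single training example changes its expected loss on any point by at most $\varepsilon_{\mathrm{stab}}$, and uniform stability in turn bounds the expected generalization gap:

\[
\sup_{z}\;\mathbb{E}_{A}\big[f(A(S); z) - f(A(S'); z)\big] \;\le\; \varepsilon_{\mathrm{stab}}
\quad\text{for all } S, S' \text{ of size } n \text{ differing in one example,}
\]
\[
\mathbb{E}_{S,A}\big[R[A(S)] - R_S[A(S)]\big] \;\le\; \varepsilon_{\mathrm{stab}},
\qquad R[w] = \mathbb{E}_{z}\, f(w; z),\quad R_S[w] = \tfrac{1}{n}\sum_{i=1}^{n} f(w; z_i).
\]

The stochastic gradient method analyzed is the update $w_{t+1} = w_t - \alpha_t \nabla_w f(w_t; z_{i_t})$ with a randomly selected example $z_{i_t}$ at each step. In the convex, Lipschitz, and smooth setting with suitably small step sizes, the paper's stability bound scales, up to constants, like $(L^2/n)\sum_{t} \alpha_t$, so fewer iterations or smaller steps translate directly into a smaller generalization gap; this is the sense in which training faster generalizes better.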

Cite this Paper


BibTeX
@InProceedings{pmlr-v48-hardt16,
  title     = {Train faster, generalize better: Stability of stochastic gradient descent},
  author    = {Hardt, Moritz and Recht, Ben and Singer, Yoram},
  booktitle = {Proceedings of The 33rd International Conference on Machine Learning},
  pages     = {1225--1234},
  year      = {2016},
  editor    = {Balcan, Maria Florina and Weinberger, Kilian Q.},
  volume    = {48},
  series    = {Proceedings of Machine Learning Research},
  address   = {New York, New York, USA},
  month     = {20--22 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v48/hardt16.pdf},
  url       = {https://proceedings.mlr.press/v48/hardt16.html},
  abstract  = {We show that parametric models trained by a stochastic gradient method (SGM) with few iterations have vanishing generalization error. We prove our results by arguing that SGM is algorithmically stable in the sense of Bousquet and Elisseeff. Our analysis only employs elementary tools from convex and continuous optimization. We derive stability bounds for both convex and non-convex optimization under standard Lipschitz and smoothness assumptions. Applying our results to the convex case, we provide new insights for why multiple epochs of stochastic gradient methods generalize well in practice. In the non-convex case, we give a new interpretation of common practices in neural networks, and formally show that popular techniques for training large deep models are indeed stability-promoting. Our findings conceptually underscore the importance of reducing training time beyond its obvious benefit.}
}
Endnote
%0 Conference Paper
%T Train faster, generalize better: Stability of stochastic gradient descent
%A Moritz Hardt
%A Ben Recht
%A Yoram Singer
%B Proceedings of The 33rd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2016
%E Maria Florina Balcan
%E Kilian Q. Weinberger
%F pmlr-v48-hardt16
%I PMLR
%P 1225--1234
%U https://proceedings.mlr.press/v48/hardt16.html
%V 48
%X We show that parametric models trained by a stochastic gradient method (SGM) with few iterations have vanishing generalization error. We prove our results by arguing that SGM is algorithmically stable in the sense of Bousquet and Elisseeff. Our analysis only employs elementary tools from convex and continuous optimization. We derive stability bounds for both convex and non-convex optimization under standard Lipschitz and smoothness assumptions. Applying our results to the convex case, we provide new insights for why multiple epochs of stochastic gradient methods generalize well in practice. In the non-convex case, we give a new interpretation of common practices in neural networks, and formally show that popular techniques for training large deep models are indeed stability-promoting. Our findings conceptually underscore the importance of reducing training time beyond its obvious benefit.
RIS
TY - CPAPER
TI - Train faster, generalize better: Stability of stochastic gradient descent
AU - Moritz Hardt
AU - Ben Recht
AU - Yoram Singer
BT - Proceedings of The 33rd International Conference on Machine Learning
DA - 2016/06/11
ED - Maria Florina Balcan
ED - Kilian Q. Weinberger
ID - pmlr-v48-hardt16
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 48
SP - 1225
EP - 1234
L1 - http://proceedings.mlr.press/v48/hardt16.pdf
UR - https://proceedings.mlr.press/v48/hardt16.html
AB - We show that parametric models trained by a stochastic gradient method (SGM) with few iterations have vanishing generalization error. We prove our results by arguing that SGM is algorithmically stable in the sense of Bousquet and Elisseeff. Our analysis only employs elementary tools from convex and continuous optimization. We derive stability bounds for both convex and non-convex optimization under standard Lipschitz and smoothness assumptions. Applying our results to the convex case, we provide new insights for why multiple epochs of stochastic gradient methods generalize well in practice. In the non-convex case, we give a new interpretation of common practices in neural networks, and formally show that popular techniques for training large deep models are indeed stability-promoting. Our findings conceptually underscore the importance of reducing training time beyond its obvious benefit.
ER -
APA
Hardt, M., Recht, B. & Singer, Y. (2016). Train faster, generalize better: Stability of stochastic gradient descent. Proceedings of The 33rd International Conference on Machine Learning, in Proceedings of Machine Learning Research 48:1225-1234. Available from https://proceedings.mlr.press/v48/hardt16.html.