Understanding the Generalization Benefits of Late Learning Rate Decay

Yinuo Ren, Chao Ma, Lexing Ying
Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, PMLR 238:4465-4473, 2024.

Abstract

Why does training neural networks with large learning rates for a longer time often lead to better generalization? In this paper, we delve into this question by examining the relation between the training and testing losses of neural networks. Through visualization of these losses, we note that the training trajectory with a large learning rate navigates through the minima manifold of the training loss, finally nearing the neighborhood of the testing loss minimum. Motivated by these findings, we introduce a nonlinear model whose loss landscapes mirror those observed for real neural networks. Upon investigating the training process using SGD on our model, we demonstrate that an extended phase with a large learning rate steers our model towards the minimum norm solution of the training loss, which may achieve near-optimal generalization, thereby affirming the empirically observed benefits of late learning rate decay.
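
The training schedule studied in the paper can be illustrated with a minimal PyTorch-style sketch (not the paper's experimental setup; the model, data, and hyperparameter values below are illustrative placeholders): keep the learning rate large for most of training, then decay it only near the end.

# Minimal sketch of "late learning rate decay" with SGD.
# Model, data, and hyperparameters are illustrative placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=0.5)  # large initial learning rate
loss_fn = nn.MSELoss()

total_epochs = 1000
decay_epoch = 900      # decay "late": only in the last 10% of training
decayed_lr = 0.005

# toy data standing in for a real training set
x = torch.randn(256, 32)
y = torch.randn(256, 1)

for epoch in range(total_epochs):
    if epoch == decay_epoch:
        for group in optimizer.param_groups:
            group["lr"] = decayed_lr   # one-shot late learning rate decay
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

In the paper's picture, the long large-learning-rate phase lets SGD travel along the minima manifold of the training loss toward the minimum norm solution, and the late decay then settles the iterate into that well-generalizing neighborhood.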

Cite this Paper

BibTeX
@InProceedings{pmlr-v238-ren24c,
  title     = {Understanding the Generalization Benefits of Late Learning Rate Decay},
  author    = {Ren, Yinuo and Ma, Chao and Ying, Lexing},
  booktitle = {Proceedings of The 27th International Conference on Artificial Intelligence and Statistics},
  pages     = {4465--4473},
  year      = {2024},
  editor    = {Dasgupta, Sanjoy and Mandt, Stephan and Li, Yingzhen},
  volume    = {238},
  series    = {Proceedings of Machine Learning Research},
  month     = {02--04 May},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v238/ren24c/ren24c.pdf},
  url       = {https://proceedings.mlr.press/v238/ren24c.html},
  abstract  = {Why does training neural networks with large learning rates for a longer time often lead to better generalization? In this paper, we delve into this question by examining the relation between the training and testing losses of neural networks. Through visualization of these losses, we note that the training trajectory with a large learning rate navigates through the minima manifold of the training loss, finally nearing the neighborhood of the testing loss minimum. Motivated by these findings, we introduce a nonlinear model whose loss landscapes mirror those observed for real neural networks. Upon investigating the training process using SGD on our model, we demonstrate that an extended phase with a large learning rate steers our model towards the minimum norm solution of the training loss, which may achieve near-optimal generalization, thereby affirming the empirically observed benefits of late learning rate decay.}
}
Endnote
%0 Conference Paper
%T Understanding the Generalization Benefits of Late Learning Rate Decay
%A Yinuo Ren
%A Chao Ma
%A Lexing Ying
%B Proceedings of The 27th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2024
%E Sanjoy Dasgupta
%E Stephan Mandt
%E Yingzhen Li
%F pmlr-v238-ren24c
%I PMLR
%P 4465--4473
%U https://proceedings.mlr.press/v238/ren24c.html
%V 238
%X Why does training neural networks with large learning rates for a longer time often lead to better generalization? In this paper, we delve into this question by examining the relation between the training and testing losses of neural networks. Through visualization of these losses, we note that the training trajectory with a large learning rate navigates through the minima manifold of the training loss, finally nearing the neighborhood of the testing loss minimum. Motivated by these findings, we introduce a nonlinear model whose loss landscapes mirror those observed for real neural networks. Upon investigating the training process using SGD on our model, we demonstrate that an extended phase with a large learning rate steers our model towards the minimum norm solution of the training loss, which may achieve near-optimal generalization, thereby affirming the empirically observed benefits of late learning rate decay.
APA
Ren, Y., Ma, C. & Ying, L. (2024). Understanding the Generalization Benefits of Late Learning Rate Decay. Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 238:4465-4473. Available from https://proceedings.mlr.press/v238/ren24c.html.