A Comprehensive Framework for Analyzing the Convergence of Adam: Bridging the Gap with SGD

Ruinan Jin, Xiao Li, Yaoliang Yu, Baoxiang Wang
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:27979-28030, 2025.

Abstract

Adaptive moment estimation (Adam) is a cornerstone optimization algorithm in deep learning, widely recognized for its flexibility with adaptive learning rates and efficiency in handling large-scale data. However, despite its practical success, the theoretical understanding of Adam’s convergence has been constrained by stringent assumptions, such as almost surely bounded stochastic gradients or uniformly bounded gradients, which are more restrictive than those typically required for analyzing stochastic gradient descent (SGD). In this paper, we introduce a novel and comprehensive framework for analyzing the convergence properties of Adam. This framework offers a versatile approach to establishing Adam’s convergence. Specifically, we prove that Adam achieves asymptotic (last iterate sense) convergence in both the almost sure sense and the $L_1$ sense under the relaxed assumptions typically used for SGD, namely $L$-smoothness and the ABC inequality. Meanwhile, under the same assumptions, we show that Adam attains non-asymptotic sample complexity bounds similar to those of SGD.
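As context for the result stated above, the Adam update and the two assumptions named in the abstract are sketched below in their commonly used forms; the exact formulation and constants adopted in the paper may differ.

Adam maintains exponential moving averages of the stochastic gradient $g_t$ and of its elementwise square, and scales the step accordingly:
$$m_t = \beta_1 m_{t-1} + (1 - \beta_1)\, g_t, \qquad v_t = \beta_2 v_{t-1} + (1 - \beta_2)\, g_t^{2}, \qquad x_{t+1} = x_t - \alpha_t \, \frac{m_t}{\sqrt{v_t} + \epsilon},$$
with all operations taken elementwise (the original Adam additionally applies the bias corrections $\hat m_t = m_t/(1 - \beta_1^t)$ and $\hat v_t = v_t/(1 - \beta_2^t)$).

$L$-smoothness requires the gradient of the objective $f$ to be $L$-Lipschitz:
$$\|\nabla f(x) - \nabla f(y)\| \le L\,\|x - y\| \quad \text{for all } x, y.$$

The ABC inequality bounds the second moment of the stochastic gradient $g(x)$ (with $\mathbb{E}[g(x)] = \nabla f(x)$) by the suboptimality gap and the true gradient norm: for some constants $A, B, C \ge 0$,
$$\mathbb{E}\big[\|g(x)\|^2\big] \le A\,\big(f(x) - f^*\big) + B\,\|\nabla f(x)\|^2 + C,$$
where $f^* = \inf_x f(x)$; taking $A = 0$ and $B = 1$ recovers the familiar bounded-variance condition, so the ABC inequality is strictly weaker than bounded stochastic gradients.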

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-jin25c,
  title     = {A Comprehensive Framework for Analyzing the Convergence of Adam: Bridging the Gap with {SGD}},
  author    = {Jin, Ruinan and Li, Xiao and Yu, Yaoliang and Wang, Baoxiang},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {27979--28030},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/jin25c/jin25c.pdf},
  url       = {https://proceedings.mlr.press/v267/jin25c.html}
}
Endnote
%0 Conference Paper
%T A Comprehensive Framework for Analyzing the Convergence of Adam: Bridging the Gap with SGD
%A Ruinan Jin
%A Xiao Li
%A Yaoliang Yu
%A Baoxiang Wang
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-jin25c
%I PMLR
%P 27979--28030
%U https://proceedings.mlr.press/v267/jin25c.html
%V 267
APA
Jin, R., Li, X., Yu, Y., & Wang, B. (2025). A Comprehensive Framework for Analyzing the Convergence of Adam: Bridging the Gap with SGD. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:27979-28030. Available from https://proceedings.mlr.press/v267/jin25c.html.
