Non-convex Stochastic Composite Optimization with Polyak Momentum

Yuan Gao, Anton Rodomanov, Sebastian U Stich
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:14826-14843, 2024.

Abstract

The stochastic proximal gradient method is a powerful generalization of the widely used stochastic gradient descent (SGD) method and has found numerous applications in Machine Learning. However, it is well known that this method fails to converge in non-convex settings in which the stochastic noise is significant (i.e., when only small or bounded batch sizes are used). In this paper, we focus on the stochastic proximal gradient method with Polyak momentum. We prove that this method attains an optimal convergence rate for non-convex composite optimization problems, regardless of batch size. Additionally, we rigorously analyze the variance-reduction effect of Polyak momentum in the composite optimization setting, and we show that the method also converges when the proximal step can only be solved inexactly. Finally, we provide numerical experiments to validate our theoretical results.
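To make the setting concrete, below is a minimal sketch in Python of a stochastic proximal gradient update with Polyak momentum, applied to an illustrative ℓ1-regularized least-squares problem. It is not the authors' exact algorithm or notation: the momentum averaging form, the step size gamma, the momentum parameter beta, the regularization weight lam, and the soft-thresholding prox are all assumptions made for illustration.

import numpy as np

def soft_threshold(x, tau):
    # Proximal operator of tau * ||.||_1 (soft-thresholding), playing the role
    # of the non-smooth part of the composite objective.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def prox_sgd_polyak(grad_fn, x0, steps=1000, gamma=0.01, beta=0.9, lam=0.1, seed=0):
    # Sketch of a stochastic proximal gradient step with Polyak-style momentum:
    #   d_t     = beta * d_{t-1} + (1 - beta) * g_t            (averaged gradient estimate)
    #   x_{t+1} = prox_{gamma * lam * ||.||_1}(x_t - gamma * d_t)
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    d = np.zeros_like(x)
    for _ in range(steps):
        g = grad_fn(x, rng)                     # stochastic gradient of the smooth part
        d = beta * d + (1.0 - beta) * g         # momentum averaging acts as variance reduction
        x = soft_threshold(x - gamma * d, gamma * lam)
    return x

# Toy usage: sparse least squares, f(x) = E_i [ 0.5 * (a_i^T x - b_i)^2 ] + lam * ||x||_1.
A = np.random.default_rng(1).normal(size=(200, 20))
x_true = np.zeros(20); x_true[:3] = 1.0
b = A @ x_true

def grad_fn(x, rng):
    i = rng.integers(A.shape[0])                # single-sample (batch size 1) stochastic gradient
    return (A[i] @ x - b[i]) * A[i]

x_hat = prox_sgd_polyak(grad_fn, np.zeros(20), steps=20000, gamma=0.05, beta=0.9, lam=0.01)

In keeping with the abstract's claim about batch sizes, the toy usage draws a single sample per step; the exponentially averaged direction d is the ingredient whose variance-reduction effect the paper analyzes.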

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-gao24l,
  title     = {Non-convex Stochastic Composite Optimization with Polyak Momentum},
  author    = {Gao, Yuan and Rodomanov, Anton and Stich, Sebastian U},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {14826--14843},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/gao24l/gao24l.pdf},
  url       = {https://proceedings.mlr.press/v235/gao24l.html},
  abstract  = {The stochastic proximal gradient method is a powerful generalization of the widely used stochastic gradient descent (SGD) method and has found numerous applications in Machine Learning. However, it is notoriously known that this method fails to converge in non-convex settings where the stochastic noise is significant (i.e. when only small or bounded batch sizes are used). In this paper, we focus on the stochastic proximal gradient method with Polyak momentum. We prove this method attains an optimal convergence rate for non-convex composite optimization problems, regardless of batch size. Additionally, we rigorously analyze the variance reduction effect of the Polyak momentum in the composite optimization setting and we show the method also converges when the proximal step can only be solved inexactly. Finally, we provide numerical experiments to validate our theoretical results.}
}
Endnote
%0 Conference Paper
%T Non-convex Stochastic Composite Optimization with Polyak Momentum
%A Yuan Gao
%A Anton Rodomanov
%A Sebastian U Stich
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-gao24l
%I PMLR
%P 14826--14843
%U https://proceedings.mlr.press/v235/gao24l.html
%V 235
%X The stochastic proximal gradient method is a powerful generalization of the widely used stochastic gradient descent (SGD) method and has found numerous applications in Machine Learning. However, it is notoriously known that this method fails to converge in non-convex settings where the stochastic noise is significant (i.e. when only small or bounded batch sizes are used). In this paper, we focus on the stochastic proximal gradient method with Polyak momentum. We prove this method attains an optimal convergence rate for non-convex composite optimization problems, regardless of batch size. Additionally, we rigorously analyze the variance reduction effect of the Polyak momentum in the composite optimization setting and we show the method also converges when the proximal step can only be solved inexactly. Finally, we provide numerical experiments to validate our theoretical results.
APA
Gao, Y., Rodomanov, A. & Stich, S.U. (2024). Non-convex Stochastic Composite Optimization with Polyak Momentum. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:14826-14843. Available from https://proceedings.mlr.press/v235/gao24l.html.
