Stochastic Weakly Convex Optimization beyond Lipschitz Continuity

Wenzhi Gao, Qi Deng
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:14651-14680, 2024.

Abstract

This paper considers stochastic weakly convex optimization without the standard Lipschitz continuity assumption. Based on new adaptive regularization (stepsize) strategies, we show that a wide class of stochastic algorithms, including the stochastic subgradient method, preserve the $\mathcal{O} ( 1 / \sqrt{K})$ convergence rate with constant failure rate. Our analyses rest on rather weak assumptions: the Lipschitz parameter can be either bounded by a general growth function of $\|x\|$ or locally estimated through independent random samples. Numerical experiments demonstrate the efficiency and robustness of our proposed stepsize policies.
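To make the setting concrete, below is a minimal Python sketch of one such scheme: a stochastic subgradient step whose stepsize rescales a base O(1/sqrt(K)) schedule by a local Lipschitz estimate taken from the sampled subgradient. This is an illustrative assumption, not the paper's exact regularization policy; the toy loss and oracle are likewise hypothetical.

import numpy as np

def adaptive_stochastic_subgradient(subgrad_oracle, x0, K, gamma0=1.0):
    # Illustrative sketch only: gamma_k divides a base O(1/sqrt(K)) schedule
    # by a crude local Lipschitz estimate taken from the sampled subgradient.
    # The paper's adaptive stepsize policies are more general than this form.
    x = np.asarray(x0, dtype=float)
    for _ in range(K):
        g = subgrad_oracle(x)                  # stochastic subgradient at x
        L_hat = max(np.linalg.norm(g), 1.0)    # local Lipschitz estimate (assumed form)
        gamma = gamma0 / (L_hat * np.sqrt(K))  # normalized O(1/sqrt(K)) stepsize
        x = x - gamma * g
    return x

# Usage on a weakly convex, non-Lipschitz toy problem:
# robust phase retrieval, f(x) = (1/n) * sum_i |<a_i, x>^2 - b_i|.
rng = np.random.default_rng(0)
d, n = 10, 1000
A = rng.normal(size=(n, d))
x_true = rng.normal(size=d)
b = (A @ x_true) ** 2

def oracle(x):
    i = rng.integers(n)
    r = A[i] @ x
    return np.sign(r ** 2 - b[i]) * 2.0 * r * A[i]

x_hat = adaptive_stochastic_subgradient(oracle, rng.normal(size=d), K=20000)

Dividing by the sampled subgradient norm keeps the step magnitude controlled even when the local Lipschitz parameter grows with $\|x\|$, which is the regime this line of work targets.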

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-gao24d,
  title =     {Stochastic Weakly Convex Optimization beyond {L}ipschitz Continuity},
  author =    {Gao, Wenzhi and Deng, Qi},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages =     {14651--14680},
  year =      {2024},
  editor =    {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume =    {235},
  series =    {Proceedings of Machine Learning Research},
  month =     {21--27 Jul},
  publisher = {PMLR},
  pdf =       {https://raw.githubusercontent.com/mlresearch/v235/main/assets/gao24d/gao24d.pdf},
  url =       {https://proceedings.mlr.press/v235/gao24d.html},
  abstract =  {This paper considers stochastic weakly convex optimization without the standard Lipschitz continuity assumption. Based on new adaptive regularization (stepsize) strategies, we show that a wide class of stochastic algorithms, including the stochastic subgradient method, preserve the $\mathcal{O} ( 1 / \sqrt{K})$ convergence rate with constant failure rate. Our analyses rest on rather weak assumptions: the Lipschitz parameter can be either bounded by a general growth function of $\|x\|$ or locally estimated through independent random samples. Numerical experiments demonstrate the efficiency and robustness of our proposed stepsize policies.}
}
Endnote
%0 Conference Paper
%T Stochastic Weakly Convex Optimization beyond Lipschitz Continuity
%A Wenzhi Gao
%A Qi Deng
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-gao24d
%I PMLR
%P 14651--14680
%U https://proceedings.mlr.press/v235/gao24d.html
%V 235
%X This paper considers stochastic weakly convex optimization without the standard Lipschitz continuity assumption. Based on new adaptive regularization (stepsize) strategies, we show that a wide class of stochastic algorithms, including the stochastic subgradient method, preserve the $\mathcal{O} ( 1 / \sqrt{K})$ convergence rate with constant failure rate. Our analyses rest on rather weak assumptions: the Lipschitz parameter can be either bounded by a general growth function of $\|x\|$ or locally estimated through independent random samples. Numerical experiments demonstrate the efficiency and robustness of our proposed stepsize policies.
APA
Gao, W. & Deng, Q. (2024). Stochastic Weakly Convex Optimization beyond Lipschitz Continuity. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:14651-14680. Available from https://proceedings.mlr.press/v235/gao24d.html.