Error bounds, PL condition, and quadratic growth for weakly convex functions, and linear convergences of proximal point methods

Feng-Yi Liao, Lijun Ding, Yang Zheng
Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:993-1005, 2024.

Abstract

Many machine learning problems lack strong convexity properties. Fortunately, recent studies have revealed that first-order algorithms also enjoy linear convergence under various weaker regularity conditions. While the relationship among different conditions for convex and smooth functions is well understood, the same is not true in the nonsmooth setting. In this paper, we go beyond convexity and smoothness and clarify the connections among common regularity conditions (including strong convexity, restricted secant inequality, subdifferential error bound, Polyak-Łojasiewicz inequality, and quadratic growth) in the class of weakly convex functions. In addition, we present a simple and modular proof for the linear convergence of the proximal point method (PPM) for convex (possibly nonsmooth) optimization using these regularity conditions. The linear convergence also holds when the subproblems of PPM are solved inexactly with proper control of inexactness.
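
The regularity conditions named in the abstract admit the following standard forms, stated here for a function f with solution set X* and optimal value f*. This is a sketch of the usual textbook definitions; the paper's exact parameterization for weakly convex f (where ∂f denotes the regular subdifferential) may differ.

\begin{align*}
&\text{Strong convexity:} && f(y) \ge f(x) + \langle g,\, y - x\rangle + \tfrac{\mu}{2}\|y - x\|^2 \quad \forall\, g \in \partial f(x),\\
&\text{Restricted secant inequality:} && \langle g,\, x - \mathrm{proj}_{X^\star}(x)\rangle \ge \mu\,\mathrm{dist}(x, X^\star)^2 \quad \forall\, g \in \partial f(x),\\
&\text{Subdifferential error bound:} && \mathrm{dist}\bigl(0, \partial f(x)\bigr) \ge \mu\,\mathrm{dist}(x, X^\star),\\
&\text{Polyak-Łojasiewicz:} && \tfrac{1}{2}\,\mathrm{dist}\bigl(0, \partial f(x)\bigr)^2 \ge \mu\,\bigl(f(x) - f^\star\bigr),\\
&\text{Quadratic growth:} && f(x) - f^\star \ge \tfrac{\mu}{2}\,\mathrm{dist}(x, X^\star)^2.
\end{align*}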

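The proximal point method iterates x_{k+1} = argmin_y f(y) + (ρ/2)||y - x_k||^2. Below is a minimal Python sketch of an inexact PPM in which each proximal subproblem is solved approximately by a fixed budget of subgradient steps. The names inexact_ppm and f_subgrad, the parameter rho, and the fixed inner budget are illustrative assumptions, not the paper's method; the paper's control of inexactness is more refined than a fixed budget.

import numpy as np


def inexact_ppm(f_subgrad, x0, rho=1.0, outer_iters=50, inner_iters=200, lr=0.01):
    """Inexact proximal point method (illustrative sketch).

    Each outer step approximately solves the proximal subproblem
        x_{k+1} ~ argmin_y f(y) + (rho / 2) * ||y - x_k||^2
    via a fixed number of subgradient steps on the regularized
    objective. (The paper controls inexactness more carefully
    than a fixed inner budget.)
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(outer_iters):
        y = x.copy()
        for _ in range(inner_iters):
            # Subgradient of y -> f(y) + (rho/2) * ||y - x||^2.
            g = f_subgrad(y) + rho * (y - x)
            y -= lr * g
        x = y  # accept the approximate proximal point
    return x


# Example: f(x) = ||x||_1, nonsmooth and convex, minimized at the origin;
# sign(x) is a valid subgradient of the l1 norm.
if __name__ == "__main__":
    x_final = inexact_ppm(lambda x: np.sign(x), x0=np.array([2.0, -3.0]))
    print(x_final)  # close to [0, 0]

Under quadratic growth (or any of the equivalent conditions above), the exact outer iteration contracts the distance to the solution set at a linear rate, which is the convergence behavior the paper establishes, including when the subproblems are solved inexactly with properly controlled errors.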
Cite this Paper


BibTeX
@InProceedings{pmlr-v242-liao24a,
  title     = {Error bounds, {PL} condition, and quadratic growth for weakly convex functions, and linear convergences of proximal point methods},
  author    = {Liao, Feng-Yi and Ding, Lijun and Zheng, Yang},
  booktitle = {Proceedings of the 6th Annual Learning for Dynamics \& Control Conference},
  pages     = {993--1005},
  year      = {2024},
  editor    = {Abate, Alessandro and Cannon, Mark and Margellos, Kostas and Papachristodoulou, Antonis},
  volume    = {242},
  series    = {Proceedings of Machine Learning Research},
  month     = {15--17 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v242/liao24a/liao24a.pdf},
  url       = {https://proceedings.mlr.press/v242/liao24a.html},
  abstract  = {Many machine learning problems lack strong convexity properties. Fortunately, recent studies have revealed that first-order algorithms also enjoy linear convergence under various weaker regularity conditions. While the relationship among different conditions for convex and smooth functions is well understood, the same is not true in the nonsmooth setting. In this paper, we go beyond convexity and smoothness and clarify the connections among common regularity conditions (including strong convexity, restricted secant inequality, subdifferential error bound, Polyak-Łojasiewicz inequality, and quadratic growth) in the class of weakly convex functions. In addition, we present a simple and modular proof for the linear convergence of the proximal point method (PPM) for convex (possibly nonsmooth) optimization using these regularity conditions. The linear convergence also holds when the subproblems of PPM are solved inexactly with proper control of inexactness.}
}
Endnote
%0 Conference Paper
%T Error bounds, PL condition, and quadratic growth for weakly convex functions, and linear convergences of proximal point methods
%A Feng-Yi Liao
%A Lijun Ding
%A Yang Zheng
%B Proceedings of the 6th Annual Learning for Dynamics & Control Conference
%C Proceedings of Machine Learning Research
%D 2024
%E Alessandro Abate
%E Mark Cannon
%E Kostas Margellos
%E Antonis Papachristodoulou
%F pmlr-v242-liao24a
%I PMLR
%P 993--1005
%U https://proceedings.mlr.press/v242/liao24a.html
%V 242
%X Many machine learning problems lack strong convexity properties. Fortunately, recent studies have revealed that first-order algorithms also enjoy linear convergence under various weaker regularity conditions. While the relationship among different conditions for convex and smooth functions is well understood, the same is not true in the nonsmooth setting. In this paper, we go beyond convexity and smoothness and clarify the connections among common regularity conditions (including strong convexity, restricted secant inequality, subdifferential error bound, Polyak-Łojasiewicz inequality, and quadratic growth) in the class of weakly convex functions. In addition, we present a simple and modular proof for the linear convergence of the proximal point method (PPM) for convex (possibly nonsmooth) optimization using these regularity conditions. The linear convergence also holds when the subproblems of PPM are solved inexactly with proper control of inexactness.
APA
Liao, F., Ding, L., & Zheng, Y. (2024). Error bounds, PL condition, and quadratic growth for weakly convex functions, and linear convergences of proximal point methods. Proceedings of the 6th Annual Learning for Dynamics & Control Conference, in Proceedings of Machine Learning Research 242:993-1005. Available from https://proceedings.mlr.press/v242/liao24a.html.
