Quadratically Regularized Subgradient Methods for Weakly Convex Optimization with Weakly Convex Constraints

Runchao Ma, Qihang Lin, Tianbao Yang
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:6554-6564, 2020.

Abstract

Optimization models with non-convex constraints arise in many tasks in machine learning, e.g., learning with fairness constraints or Neyman-Pearson classification with non-convex loss. Although many efficient methods have been developed with theoretical convergence guarantees for non-convex unconstrained problems, it remains a challenge to design provably efficient algorithms for problems with non-convex functional constraints. This paper proposes a class of subgradient methods for constrained optimization where the objective function and the constraint functions are weakly convex and nonsmooth. Our methods solve a sequence of strongly convex subproblems, where a quadratic regularization term is added to both the objective function and each constraint function. Each subproblem can be solved by various algorithms for strongly convex optimization. Under a uniform Slater’s condition, we establish the computational complexities of our methods for finding a nearly stationary point.
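
For concreteness, here is a minimal sketch of the regularized subproblem the abstract describes, written in our own notation rather than the paper's: the symbols ρ (the weak-convexity modulus), ρ̂ > ρ (the regularization parameter), and x_k (the current iterate) are assumptions introduced for illustration. If f and g_1, …, g_m are ρ-weakly convex, iteration k (approximately) solves

\[
  x_{k+1} \approx \operatorname*{argmin}_{x} \; f(x) + \tfrac{\hat{\rho}}{2}\,\lVert x - x_k \rVert^2
  \quad \text{subject to} \quad
  g_i(x) + \tfrac{\hat{\rho}}{2}\,\lVert x - x_k \rVert^2 \le 0, \quad i = 1, \dots, m.
\]

Since a ρ-weakly convex function plus (ρ̂/2)‖x − x_k‖² is (ρ̂ − ρ)-strongly convex, both the objective and every constraint of this subproblem are strongly convex, which is what allows each subproblem to be handed off to a standard solver for strongly convex optimization, as stated above.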

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-ma20d,
  title     = {Quadratically Regularized Subgradient Methods for Weakly Convex Optimization with Weakly Convex Constraints},
  author    = {Ma, Runchao and Lin, Qihang and Yang, Tianbao},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {6554--6564},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/ma20d/ma20d.pdf},
  url       = {https://proceedings.mlr.press/v119/ma20d.html},
  abstract  = {Optimization models with non-convex constraints arise in many tasks in machine learning, e.g., learning with fairness constraints or Neyman-Pearson classification with non-convex loss. Although many efficient methods have been developed with theoretical convergence guarantees for non-convex unconstrained problems, it remains a challenge to design provably efficient algorithms for problems with non-convex functional constraints. This paper proposes a class of subgradient methods for constrained optimization where the objective function and the constraint functions are weakly convex and nonsmooth. Our methods solve a sequence of strongly convex subproblems, where a quadratic regularization term is added to both the objective function and each constraint function. Each subproblem can be solved by various algorithms for strongly convex optimization. Under a uniform Slater’s condition, we establish the computational complexities of our methods for finding a nearly stationary point.}
}
Endnote
%0 Conference Paper
%T Quadratically Regularized Subgradient Methods for Weakly Convex Optimization with Weakly Convex Constraints
%A Runchao Ma
%A Qihang Lin
%A Tianbao Yang
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-ma20d
%I PMLR
%P 6554--6564
%U https://proceedings.mlr.press/v119/ma20d.html
%V 119
%X Optimization models with non-convex constraints arise in many tasks in machine learning, e.g., learning with fairness constraints or Neyman-Pearson classification with non-convex loss. Although many efficient methods have been developed with theoretical convergence guarantees for non-convex unconstrained problems, it remains a challenge to design provably efficient algorithms for problems with non-convex functional constraints. This paper proposes a class of subgradient methods for constrained optimization where the objective function and the constraint functions are weakly convex and nonsmooth. Our methods solve a sequence of strongly convex subproblems, where a quadratic regularization term is added to both the objective function and each constraint function. Each subproblem can be solved by various algorithms for strongly convex optimization. Under a uniform Slater’s condition, we establish the computational complexities of our methods for finding a nearly stationary point.
APA
Ma, R., Lin, Q. & Yang, T. (2020). Quadratically Regularized Subgradient Methods for Weakly Convex Optimization with Weakly Convex Constraints. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:6554-6564. Available from https://proceedings.mlr.press/v119/ma20d.html.
