Rate-improved inexact augmented Lagrangian method for constrained nonconvex optimization

Zichong Li, Pin-Yu Chen, Sijia Liu, Songtao Lu, Yangyang Xu
Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, PMLR 130:2170-2178, 2021.

Abstract

First-order methods have been studied for nonlinear constrained optimization within the framework of the augmented Lagrangian method (ALM) or penalty method. We propose an improved inexact ALM (iALM) and conduct a unified analysis for nonconvex problems with either convex or nonconvex constraints. Under certain regularity conditions (that are also assumed by existing works), we show an $\tilde{O}(\varepsilon^{-\frac{5}{2}})$ complexity result for a problem with a nonconvex objective and convex constraints and an $\tilde{O}(\varepsilon^{-3})$ complexity result for a problem with a nonconvex objective and nonconvex constraints, where the complexity is measured by the number of first-order oracles to yield an $\varepsilon$-KKT solution. Both results are the best known. The same-order complexity results have been achieved by penalty methods. However, two different analysis techniques are used to obtain the results, and more importantly, the penalty methods generally perform significantly worse than iALM in practice. Our improved iALM and analysis close the gap between theory and practice. Numerical experiments on nonconvex problems with convex or nonconvex constraints are provided to demonstrate the effectiveness of our proposed method.
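The abstract refers to the inexact ALM framework: approximately minimize the augmented Lagrangian in the primal variable with a first-order method, then update the multiplier and grow the penalty. As a rough illustration of that generic loop (not the paper's actual algorithm or its parameter schedule), here is a minimal sketch on a hypothetical toy problem with one linear equality constraint; the problem, step sizes, and penalty schedule are all illustrative assumptions.

```python
import numpy as np

# Hypothetical toy problem (not from the paper):
#   minimize f(x) = (x0 - 2)^2 + (x1 - 2)^2
#   subject to c(x) = x0 + x1 - 2 = 0
# The constrained minimizer is x* = (1, 1), with multiplier lambda* = 2.

def f_grad(x):
    return 2.0 * (x - 2.0)

def c(x):
    return x[0] + x[1] - 2.0

c_grad = np.array([1.0, 1.0])  # gradient of the linear constraint (constant)

def inexact_alm(x, lam=0.0, beta=1.0, outer_iters=30, inner_iters=500, tol=1e-8):
    """Generic inexact ALM loop: approximately minimize the augmented
    Lagrangian L(x; lam, beta) = f(x) + lam*c(x) + (beta/2)*c(x)**2 in x
    with gradient steps, then update the multiplier and grow the penalty."""
    for _ in range(outer_iters):
        # Step size 1/L for this quadratic subproblem; L = 2 + 2*beta here.
        step = 1.0 / (2.0 + 2.0 * beta)
        # Inner loop: first-order steps on the augmented Lagrangian.
        for _ in range(inner_iters):
            g = f_grad(x) + (lam + beta * c(x)) * c_grad
            if np.linalg.norm(g) < tol:
                break
            x = x - step * g
        lam = lam + beta * c(x)       # dual (multiplier) update
        beta = min(2.0 * beta, 1e3)   # geometrically increase the penalty
    return x, lam

x, lam = inexact_alm(np.zeros(2))
print(np.round(x, 4), round(lam, 4))  # x close to [1, 1], lam close to 2
```

The paper's contribution concerns how accurately each subproblem must be solved and how the penalty and multipliers are scheduled so that the total first-order oracle count meets the stated $\tilde{O}(\varepsilon^{-5/2})$ and $\tilde{O}(\varepsilon^{-3})$ bounds; this sketch only shows the outer/inner structure those analyses are built on.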

Cite this Paper


BibTeX
@InProceedings{pmlr-v130-li21d,
  title     = {Rate-improved inexact augmented Lagrangian method for constrained nonconvex optimization},
  author    = {Li, Zichong and Chen, Pin-Yu and Liu, Sijia and Lu, Songtao and Xu, Yangyang},
  booktitle = {Proceedings of The 24th International Conference on Artificial Intelligence and Statistics},
  pages     = {2170--2178},
  year      = {2021},
  editor    = {Banerjee, Arindam and Fukumizu, Kenji},
  volume    = {130},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--15 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v130/li21d/li21d.pdf},
  url       = {https://proceedings.mlr.press/v130/li21d.html},
  abstract  = {First-order methods have been studied for nonlinear constrained optimization within the framework of the augmented Lagrangian method (ALM) or penalty method. We propose an improved inexact ALM (iALM) and conduct a unified analysis for nonconvex problems with either convex or nonconvex constraints. Under certain regularity conditions (that are also assumed by existing works), we show an $\tilde{O}(\varepsilon^{-\frac{5}{2}})$ complexity result for a problem with a nonconvex objective and convex constraints and an $\tilde{O}(\varepsilon^{-3})$ complexity result for a problem with a nonconvex objective and nonconvex constraints, where the complexity is measured by the number of first-order oracles to yield an $\varepsilon$-KKT solution. Both results are the best known. The same-order complexity results have been achieved by penalty methods. However, two different analysis techniques are used to obtain the results, and more importantly, the penalty methods generally perform significantly worse than iALM in practice. Our improved iALM and analysis close the gap between theory and practice. Numerical experiments on nonconvex problems with convex or nonconvex constraints are provided to demonstrate the effectiveness of our proposed method.}
}
Endnote
%0 Conference Paper
%T Rate-improved inexact augmented Lagrangian method for constrained nonconvex optimization
%A Zichong Li
%A Pin-Yu Chen
%A Sijia Liu
%A Songtao Lu
%A Yangyang Xu
%B Proceedings of The 24th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2021
%E Arindam Banerjee
%E Kenji Fukumizu
%F pmlr-v130-li21d
%I PMLR
%P 2170--2178
%U https://proceedings.mlr.press/v130/li21d.html
%V 130
%X First-order methods have been studied for nonlinear constrained optimization within the framework of the augmented Lagrangian method (ALM) or penalty method. We propose an improved inexact ALM (iALM) and conduct a unified analysis for nonconvex problems with either convex or nonconvex constraints. Under certain regularity conditions (that are also assumed by existing works), we show an $\tilde{O}(\varepsilon^{-\frac{5}{2}})$ complexity result for a problem with a nonconvex objective and convex constraints and an $\tilde{O}(\varepsilon^{-3})$ complexity result for a problem with a nonconvex objective and nonconvex constraints, where the complexity is measured by the number of first-order oracles to yield an $\varepsilon$-KKT solution. Both results are the best known. The same-order complexity results have been achieved by penalty methods. However, two different analysis techniques are used to obtain the results, and more importantly, the penalty methods generally perform significantly worse than iALM in practice. Our improved iALM and analysis close the gap between theory and practice. Numerical experiments on nonconvex problems with convex or nonconvex constraints are provided to demonstrate the effectiveness of our proposed method.
APA
Li, Z., Chen, P., Liu, S., Lu, S., & Xu, Y. (2021). Rate-improved inexact augmented Lagrangian method for constrained nonconvex optimization. Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 130:2170-2178. Available from https://proceedings.mlr.press/v130/li21d.html.