The Computational Complexity of Finding Stationary Points in Non-Convex Optimization

Alexandros Hollender, Emmanouil Zampetakis
Proceedings of Thirty Sixth Conference on Learning Theory, PMLR 195:5571-5572, 2023.

Abstract

Finding approximate stationary points, i.e., points where the gradient is approximately zero, of non-convex but smooth objective functions $f$ over unrestricted $d$-dimensional domains is one of the most fundamental problems in classical non-convex optimization. Nevertheless, the computational and query complexity of this problem are still not well understood when the dimension $d$ of the problem is independent of the approximation error. In this paper, we show the following computational and query complexity results:

1. The problem of finding approximate stationary points over unrestricted domains is PLS-complete.
2. For $d = 2$, we provide a zero-order algorithm for finding $\varepsilon$-approximate stationary points that requires at most $O(1/\varepsilon)$ value queries to the objective function.
3. We show that any algorithm needs at least $\Omega(1/\varepsilon)$ queries to the objective function and/or its gradient to find $\varepsilon$-approximate stationary points when $d = 2$. Combined with the above, this characterizes the query complexity of this problem to be $\Theta(1/\varepsilon)$.
4. For $d = 2$, we provide a zero-order algorithm for finding $\varepsilon$-KKT points in constrained optimization problems that requires at most $O(1/\sqrt{\varepsilon})$ value queries to the objective function. This closes the gap between the works of Bubeck and Mikulincer (2020) and Vavasis (1993) and characterizes the query complexity of this problem to be $\Theta(1/\sqrt{\varepsilon})$.
5. We show that finding approximate KKT points in constrained optimization is reducible to finding approximate stationary points in unconstrained optimization, but the converse is impossible.
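
To make the objects in the abstract concrete: a zero-order algorithm may only query values of the objective $f$ (never its gradient), and a point $x$ is an $\varepsilon$-approximate stationary point when $\|\nabla f(x)\| \le \varepsilon$. The Python sketch below illustrates this query model with a finite-difference stationarity check; the helper names and the test objective are illustrative assumptions, and this is not the paper's $O(1/\varepsilon)$ algorithm.

import numpy as np

def grad_estimate(f, x, h=1e-6):
    # Central finite differences: each coordinate costs two value queries,
    # which matches the zero-order (value-queries-only) access model.
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def is_eps_stationary(f, x, eps, h=1e-6):
    # eps-approximate stationarity: ||grad f(x)|| <= eps.
    return np.linalg.norm(grad_estimate(f, x, h)) <= eps

# A smooth non-convex objective on R^2 (illustrative choice).
f = lambda x: np.sin(x[0]) * np.cos(x[1]) + 0.1 * np.dot(x, x)
print(is_eps_stationary(f, np.array([0.0, 0.0]), eps=0.5))  # False: the gradient here is (1, 0)

Any procedure that drives this check to True using only calls to $f$ operates in the query model whose complexity the paper pins down at $\Theta(1/\varepsilon)$ for $d = 2$.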

Cite this Paper


BibTeX
@InProceedings{pmlr-v195-hollender23a,
  title     = {The Computational Complexity of Finding Stationary Points in Non-Convex Optimization},
  author    = {Hollender, Alexandros and Zampetakis, Emmanouil},
  booktitle = {Proceedings of Thirty Sixth Conference on Learning Theory},
  pages     = {5571--5572},
  year      = {2023},
  editor    = {Neu, Gergely and Rosasco, Lorenzo},
  volume    = {195},
  series    = {Proceedings of Machine Learning Research},
  month     = {12--15 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v195/hollender23a/hollender23a.pdf},
  url       = {https://proceedings.mlr.press/v195/hollender23a.html},
  abstract  = {Finding approximate stationary points, i.e., points where the gradient is approximately zero, of non-convex but smooth objective functions $f$ over unrestricted $d$-dimensional domains is one of the most fundamental problems in classical non-convex optimization. Nevertheless, the computational and query complexity of this problem are still not well understood when the dimension $d$ of the problem is independent of the approximation error. In this paper, we show the following computational and query complexity results: 1. The problem of finding approximate stationary points over unrestricted domains is PLS-complete. 2. For $d = 2$, we provide a zero-order algorithm for finding $\varepsilon$-approximate stationary points that requires at most $O(1/\varepsilon)$ value queries to the objective function. 3. We show that any algorithm needs at least $\Omega(1/\varepsilon)$ queries to the objective function and/or its gradient to find $\varepsilon$-approximate stationary points when $d = 2$. Combined with the above, this characterizes the query complexity of this problem to be $\Theta(1/\varepsilon)$. 4. For $d = 2$, we provide a zero-order algorithm for finding $\varepsilon$-KKT points in constrained optimization problems that requires at most $O(1/\sqrt{\varepsilon})$ value queries to the objective function. This closes the gap between the works of Bubeck and Mikulincer (2020) and Vavasis (1993) and characterizes the query complexity of this problem to be $\Theta(1/\sqrt{\varepsilon})$. 5. We show that finding approximate KKT points in constrained optimization is reducible to finding approximate stationary points in unconstrained optimization, but the converse is impossible.}
}
Endnote
%0 Conference Paper
%T The Computational Complexity of Finding Stationary Points in Non-Convex Optimization
%A Alexandros Hollender
%A Emmanouil Zampetakis
%B Proceedings of Thirty Sixth Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2023
%E Gergely Neu
%E Lorenzo Rosasco
%F pmlr-v195-hollender23a
%I PMLR
%P 5571--5572
%U https://proceedings.mlr.press/v195/hollender23a.html
%V 195
%X Finding approximate stationary points, i.e., points where the gradient is approximately zero, of non-convex but smooth objective functions $f$ over unrestricted $d$-dimensional domains is one of the most fundamental problems in classical non-convex optimization. Nevertheless, the computational and query complexity of this problem are still not well understood when the dimension $d$ of the problem is independent of the approximation error. In this paper, we show the following computational and query complexity results: 1. The problem of finding approximate stationary points over unrestricted domains is PLS-complete. 2. For $d = 2$, we provide a zero-order algorithm for finding $\varepsilon$-approximate stationary points that requires at most $O(1/\varepsilon)$ value queries to the objective function. 3. We show that any algorithm needs at least $\Omega(1/\varepsilon)$ queries to the objective function and/or its gradient to find $\varepsilon$-approximate stationary points when $d = 2$. Combined with the above, this characterizes the query complexity of this problem to be $\Theta(1/\varepsilon)$. 4. For $d = 2$, we provide a zero-order algorithm for finding $\varepsilon$-KKT points in constrained optimization problems that requires at most $O(1/\sqrt{\varepsilon})$ value queries to the objective function. This closes the gap between the works of Bubeck and Mikulincer (2020) and Vavasis (1993) and characterizes the query complexity of this problem to be $\Theta(1/\sqrt{\varepsilon})$. 5. We show that finding approximate KKT points in constrained optimization is reducible to finding approximate stationary points in unconstrained optimization, but the converse is impossible.
APA
Hollender, A. & Zampetakis, E. (2023). The Computational Complexity of Finding Stationary Points in Non-Convex Optimization. Proceedings of Thirty Sixth Conference on Learning Theory, in Proceedings of Machine Learning Research 195:5571-5572. Available from https://proceedings.mlr.press/v195/hollender23a.html.
