Deterministic Nonsmooth Nonconvex Optimization

Michael Jordan, Guy Kornowski, Tianyi Lin, Ohad Shamir, Manolis Zampetakis
Proceedings of Thirty Sixth Conference on Learning Theory, PMLR 195:4570-4597, 2023.

Abstract

We study the complexity of optimizing nonsmooth nonconvex Lipschitz functions by producing $(\delta,\epsilon)$-Goldstein stationary points. Several recent works have presented randomized algorithms that produce such points using $\widetilde{O}(\delta^{-1}\epsilon^{-3})$ first-order oracle calls, independent of the dimension $d$. It has been an open problem as to whether a similar result can be obtained via a deterministic algorithm. We resolve this open problem, showing that randomization is necessary to obtain a dimension-free rate. In particular, we prove a lower bound of $\Omega(d)$ for any deterministic algorithm. Moreover, we show that unlike smooth or convex optimization, access to function values is required for any deterministic algorithm to halt within any finite time horizon. On the other hand, we prove that if the function is even slightly smooth, then the dimension-free rate of $\widetilde{O}(\delta^{-1}\epsilon^{-3})$ can be obtained by a deterministic algorithm with merely a logarithmic dependence on the smoothness parameter. Motivated by these findings, we turn to study the complexity of deterministically smoothing Lipschitz functions. Though there are well-known efficient black-box randomized smoothings, we start by showing that no such deterministic procedure can smooth functions in a meaningful manner (suitably defined), resolving an open question in the literature. We then bypass this impossibility result for the structured case of ReLU neural networks. To that end, in a practical “white-box” setting in which the optimizer is granted access to the network’s architecture, we propose a simple, dimension-free, deterministic smoothing of ReLU networks that provably preserves $(\delta,\epsilon)$-Goldstein stationary points. Our method applies to a variety of architectures of arbitrary depth, including ResNets and ConvNets. Combined with our algorithm for slightly-smooth functions, this yields the first deterministic, dimension-free algorithm for optimizing ReLU networks, circumventing our lower bound.
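For context (this definition is not spelled out in the abstract itself, but it is the standard one in this literature): the Goldstein $\delta$-subdifferential of a Lipschitz function $f$ at $x$ is $\partial_\delta f(x) = \mathrm{conv}\big(\bigcup_{\|y-x\|\le\delta}\partial f(y)\big)$, the convex hull of Clarke subdifferentials over the $\delta$-ball around $x$, and $x$ is a $(\delta,\epsilon)$-Goldstein stationary point if $\min_{g\in\partial_\delta f(x)}\|g\|\le\epsilon$.

The “well-known efficient black-box randomized smoothings” mentioned above are typically uniform smoothing, $f_\delta(x)=\mathbb{E}_{u\sim\mathrm{Unif}(B_1)}[f(x+\delta u)]$, whose gradient can be estimated by sampling. The sketch below is only an illustration of that generic randomized baseline, with our own naming (smoothed_grad, grad_f); it is not the deterministic white-box construction proposed in the paper.

    import numpy as np

    def smoothed_grad(grad_f, x, delta, num_samples=100, rng=None):
        """Monte Carlo estimate of grad f_delta(x), where
        f_delta(x) = E_{u ~ Unif(unit ball)}[ f(x + delta*u) ];
        for Lipschitz f, grad f_delta(x) = E[ grad f(x + delta*u) ] (a.e.)."""
        rng = np.random.default_rng() if rng is None else rng
        d = x.shape[0]
        total = np.zeros(d)
        for _ in range(num_samples):
            # Draw u uniformly from the unit ball: Gaussian direction, radius U^(1/d).
            g = rng.standard_normal(d)
            u = (g / np.linalg.norm(g)) * rng.uniform() ** (1.0 / d)
            total += grad_f(x + delta * u)
        return total / num_samples

    # Toy usage: f(x) = ||x||_1 is nonsmooth with a.e. gradient sign(x).
    print(smoothed_grad(np.sign, np.array([0.3, -0.2, 0.0]), delta=0.1))

Because this estimator is randomized, it sidesteps the $\Omega(d)$ deterministic lower bound above; the abstract's point is that no deterministic black-box procedure can smooth functions in a comparable manner, which is why the white-box smoothing of ReLU networks is introduced.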

Cite this Paper


BibTeX
@InProceedings{pmlr-v195-jordan23a,
  title     = {Deterministic Nonsmooth Nonconvex Optimization},
  author    = {Jordan, Michael and Kornowski, Guy and Lin, Tianyi and Shamir, Ohad and Zampetakis, Manolis},
  booktitle = {Proceedings of Thirty Sixth Conference on Learning Theory},
  pages     = {4570--4597},
  year      = {2023},
  editor    = {Neu, Gergely and Rosasco, Lorenzo},
  volume    = {195},
  series    = {Proceedings of Machine Learning Research},
  month     = {12--15 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v195/jordan23a/jordan23a.pdf},
  url       = {https://proceedings.mlr.press/v195/jordan23a.html},
  abstract  = {We study the complexity of optimizing nonsmooth nonconvex Lipschitz functions by producing $(\delta,\epsilon)$-Goldstein stationary points. Several recent works have presented randomized algorithms that produce such points using $\widetilde{O}(\delta^{-1}\epsilon^{-3})$ first-order oracle calls, independent of the dimension $d$. It has been an open problem as to whether a similar result can be obtained via a deterministic algorithm. We resolve this open problem, showing that randomization is necessary to obtain a dimension-free rate. In particular, we prove a lower bound of $\Omega(d)$ for any deterministic algorithm. Moreover, we show that unlike smooth or convex optimization, access to function values is required for any deterministic algorithm to halt within any finite time horizon. On the other hand, we prove that if the function is even slightly smooth, then the dimension-free rate of $\widetilde{O}(\delta^{-1}\epsilon^{-3})$ can be obtained by a deterministic algorithm with merely a logarithmic dependence on the smoothness parameter. Motivated by these findings, we turn to study the complexity of deterministically smoothing Lipschitz functions. Though there are well-known efficient black-box randomized smoothings, we start by showing that no such deterministic procedure can smooth functions in a meaningful manner (suitably defined), resolving an open question in the literature. We then bypass this impossibility result for the structured case of ReLU neural networks. To that end, in a practical “white-box” setting in which the optimizer is granted access to the network’s architecture, we propose a simple, dimension-free, deterministic smoothing of ReLU networks that provably preserves $(\delta,\epsilon)$-Goldstein stationary points. Our method applies to a variety of architectures of arbitrary depth, including ResNets and ConvNets. Combined with our algorithm for slightly-smooth functions, this yields the first deterministic, dimension-free algorithm for optimizing ReLU networks, circumventing our lower bound.}
}
Endnote
%0 Conference Paper
%T Deterministic Nonsmooth Nonconvex Optimization
%A Michael Jordan
%A Guy Kornowski
%A Tianyi Lin
%A Ohad Shamir
%A Manolis Zampetakis
%B Proceedings of Thirty Sixth Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2023
%E Gergely Neu
%E Lorenzo Rosasco
%F pmlr-v195-jordan23a
%I PMLR
%P 4570--4597
%U https://proceedings.mlr.press/v195/jordan23a.html
%V 195
%X We study the complexity of optimizing nonsmooth nonconvex Lipschitz functions by producing $(\delta,\epsilon)$-Goldstein stationary points. Several recent works have presented randomized algorithms that produce such points using $\widetilde{O}(\delta^{-1}\epsilon^{-3})$ first-order oracle calls, independent of the dimension $d$. It has been an open problem as to whether a similar result can be obtained via a deterministic algorithm. We resolve this open problem, showing that randomization is necessary to obtain a dimension-free rate. In particular, we prove a lower bound of $\Omega(d)$ for any deterministic algorithm. Moreover, we show that unlike smooth or convex optimization, access to function values is required for any deterministic algorithm to halt within any finite time horizon. On the other hand, we prove that if the function is even slightly smooth, then the dimension-free rate of $\widetilde{O}(\delta^{-1}\epsilon^{-3})$ can be obtained by a deterministic algorithm with merely a logarithmic dependence on the smoothness parameter. Motivated by these findings, we turn to study the complexity of deterministically smoothing Lipschitz functions. Though there are well-known efficient black-box randomized smoothings, we start by showing that no such deterministic procedure can smooth functions in a meaningful manner (suitably defined), resolving an open question in the literature. We then bypass this impossibility result for the structured case of ReLU neural networks. To that end, in a practical “white-box” setting in which the optimizer is granted access to the network’s architecture, we propose a simple, dimension-free, deterministic smoothing of ReLU networks that provably preserves $(\delta,\epsilon)$-Goldstein stationary points. Our method applies to a variety of architectures of arbitrary depth, including ResNets and ConvNets. Combined with our algorithm for slightly-smooth functions, this yields the first deterministic, dimension-free algorithm for optimizing ReLU networks, circumventing our lower bound.
APA
Jordan, M., Kornowski, G., Lin, T., Shamir, O. & Zampetakis, M. (2023). Deterministic Nonsmooth Nonconvex Optimization. Proceedings of Thirty Sixth Conference on Learning Theory, in Proceedings of Machine Learning Research 195:4570-4597. Available from https://proceedings.mlr.press/v195/jordan23a.html.