How to Trap a Gradient Flow

Sébastien Bubeck, Dan Mikulincer
Proceedings of Thirty Third Conference on Learning Theory, PMLR 125:940-960, 2020.

Abstract

We consider the problem of finding an $\varepsilon$-approximate stationary point of a smooth function on a compact domain of $\mathbb{R}^d$. In contrast with dimension-free approaches such as gradient descent, we focus here on the case where $d$ is finite, and potentially small. This viewpoint was explored in 1993 by Vavasis, who proposed an algorithm which, for {\em any fixed finite dimension $d$}, improves upon the $O(1/\varepsilon^2)$ oracle complexity of gradient descent. For example, for $d=2$, Vavasis’ approach obtains the complexity $O(1/\varepsilon)$. Moreover, for $d=2$ he also proved a lower bound of $\Omega(1/\sqrt{\varepsilon})$ for deterministic algorithms (we extend this result to randomized algorithms). Our main contribution is an algorithm, which we call {\em gradient flow trapping} (GFT), and the analysis of its oracle complexity. In dimension $d=2$, GFT closes the gap with Vavasis’ lower bound (up to a logarithmic factor), as we show that it has complexity $O\left(\sqrt{\frac{\log(1/\varepsilon)}{\varepsilon}}\right)$. In dimension $d=3$, we show a complexity of $O\left(\frac{\log(1/\varepsilon)}{\varepsilon}\right)$, improving upon Vavasis’ $O\left(1 / \varepsilon^{1.2} \right)$. In higher dimensions, GFT has the remarkable property of being a {\em logarithmic parallel depth} strategy, in stark contrast with the polynomial depth of gradient descent or Vavasis’ algorithm. In this higher dimensional regime, the total work of GFT improves quadratically upon the only other known polylogarithmic depth strategy for this problem, namely naive grid search.
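For context on the quantities in the abstract: the cost of an algorithm is the number of gradient-oracle calls it needs to reach an $\varepsilon$-approximate stationary point, i.e. a point $x$ with $\|\nabla f(x)\| \le \varepsilon$. The sketch below is only the dimension-free baseline the abstract compares against (gradient descent with step size $1/L$ on an $L$-smooth function, which uses $O(1/\varepsilon^2)$ oracle calls in the worst case); it is not the paper's GFT algorithm, and the test function, starting point, and constants are illustrative assumptions.

```python
# Minimal sketch of the O(1/eps^2) gradient-descent baseline (NOT the GFT
# algorithm from the paper). Assumes f is L-smooth; the concrete f, x0, L
# and eps below are illustrative choices, not taken from the paper.
import numpy as np

def gradient_descent_stationary(grad, x0, L, eps, max_iter=1_000_000):
    """Run gradient descent with step 1/L until ||grad f(x)|| <= eps."""
    x = np.asarray(x0, dtype=float)
    for t in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= eps:   # eps-approximate stationary point found
            return x, t                # t = number of gradient-oracle calls
        x = x - g / L                  # standard 1/L step for an L-smooth f
    return x, max_iter

# Example: gradient of the smooth nonconvex test function x1^4 - x1^2 + x2^2 on R^2.
f_grad = lambda x: np.array([4 * x[0] ** 3 - 2 * x[0], 2 * x[1]])
x_hat, n_calls = gradient_descent_stationary(f_grad, x0=[0.9, 0.9], L=10.0, eps=1e-3)
print(n_calls, x_hat)  # worst-case oracle calls scale like 1/eps^2
```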

Cite this Paper


BibTeX
@InProceedings{pmlr-v125-bubeck20b,
  title     = {How to Trap a Gradient Flow},
  author    = {Bubeck, S\'ebastien and Mikulincer, Dan},
  booktitle = {Proceedings of Thirty Third Conference on Learning Theory},
  pages     = {940--960},
  year      = {2020},
  editor    = {Abernethy, Jacob and Agarwal, Shivani},
  volume    = {125},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--12 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v125/bubeck20b/bubeck20b.pdf},
  url       = {https://proceedings.mlr.press/v125/bubeck20b.html},
  abstract  = {We consider the problem of finding an $\varepsilon$-approximate stationary point of a smooth function on a compact domain of $\R^d$. In contrast with dimension-free approaches such as gradient descent, we focus here on the case where $d$ is finite, and potentially small. This viewpoint was explored in 1993 by Vavasis, who proposed an algorithm which, for {\em any fixed finite dimension $d$}, improves upon the $O(1/\varepsilon^2)$ oracle complexity of gradient descent. For example for $d=2$, Vavasis’ approach obtains the complexity $O(1/\varepsilon)$. Moreover for $d=2$ he also proved a lower bound of $\Omega(1/\sqrt{\varepsilon})$ for deterministic algorithms (we extend this result to randomized algorithms). Our main contribution is an algorithm, which we call {\em gradient flow trapping} (GFT), and the analysis of its oracle complexity. In dimension $d=2$, GFT closes the gap with Vavasis’ lower bound (up to a logarithmic factor), as we show that it has complexity $O\left(\sqrt{\frac{\log(1/\varepsilon)}{\varepsilon}}\right)$. In dimension $d=3$, we show a complexity of $O\left(\frac{\log(1/\varepsilon)}{\varepsilon}\right)$, improving upon Vavasis’ $O\left(1 / \varepsilon^{1.2} \right)$. In higher dimensions, GFT has the remarkable property of being a {\em logarithmic parallel depth} strategy, in stark contrast with the polynomial depth of gradient descent or Vavasis’ algorithm. In this higher dimensional regime, the total work of GFT improves quadratically upon the only other known polylogarithmic depth strategy for this problem, namely naive grid search.}
}
Endnote
%0 Conference Paper
%T How to Trap a Gradient Flow
%A Sébastien Bubeck
%A Dan Mikulincer
%B Proceedings of Thirty Third Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2020
%E Jacob Abernethy
%E Shivani Agarwal
%F pmlr-v125-bubeck20b
%I PMLR
%P 940--960
%U https://proceedings.mlr.press/v125/bubeck20b.html
%V 125
%X We consider the problem of finding an $\varepsilon$-approximate stationary point of a smooth function on a compact domain of $\R^d$. In contrast with dimension-free approaches such as gradient descent, we focus here on the case where $d$ is finite, and potentially small. This viewpoint was explored in 1993 by Vavasis, who proposed an algorithm which, for {\em any fixed finite dimension $d$}, improves upon the $O(1/\varepsilon^2)$ oracle complexity of gradient descent. For example for $d=2$, Vavasis’ approach obtains the complexity $O(1/\varepsilon)$. Moreover for $d=2$ he also proved a lower bound of $\Omega(1/\sqrt{\varepsilon})$ for deterministic algorithms (we extend this result to randomized algorithms). Our main contribution is an algorithm, which we call {\em gradient flow trapping} (GFT), and the analysis of its oracle complexity. In dimension $d=2$, GFT closes the gap with Vavasis’ lower bound (up to a logarithmic factor), as we show that it has complexity $O\left(\sqrt{\frac{\log(1/\varepsilon)}{\varepsilon}}\right)$. In dimension $d=3$, we show a complexity of $O\left(\frac{\log(1/\varepsilon)}{\varepsilon}\right)$, improving upon Vavasis’ $O\left(1 / \varepsilon^{1.2} \right)$. In higher dimensions, GFT has the remarkable property of being a {\em logarithmic parallel depth} strategy, in stark contrast with the polynomial depth of gradient descent or Vavasis’ algorithm. In this higher dimensional regime, the total work of GFT improves quadratically upon the only other known polylogarithmic depth strategy for this problem, namely naive grid search.
APA
Bubeck, S. & Mikulincer, D. (2020). How to Trap a Gradient Flow. Proceedings of Thirty Third Conference on Learning Theory, in Proceedings of Machine Learning Research 125:940-960. Available from https://proceedings.mlr.press/v125/bubeck20b.html.