Global Riemannian Acceleration in Hyperbolic and Spherical Spaces
Proceedings of The 33rd International Conference on Algorithmic Learning Theory, PMLR 167:768-826, 2022.
Abstract
We further the research on the accelerated optimization phenomenon on Riemannian manifolds by introducing accelerated global first-order methods for the optimization of L-smooth and geodesically convex (g-convex) or μ-strongly g-convex functions defined on the hyperbolic space or a subset of the sphere. For any manifold other than the Euclidean space, these are the first methods to globally achieve the same rates as accelerated gradient descent in the Euclidean space with respect to L and ϵ (and μ, if it applies), up to log factors. Previous results with these accelerated rates applied only, under strong g-convexity, in a small neighborhood of a minimizer, namely when the initial distance R to a minimizer satisfies R = O((μ/L)^(3/4)). Our rates carry a polynomial factor in 1/cos(R) (spherical case) or cosh(R) (hyperbolic case). Thus, we completely match the Euclidean case for a constant initial distance, while for larger R we incur larger constants due to the geometry. As a proxy for our solution, we solve a constrained non-convex Euclidean problem, under a condition between convexity and quasar-convexity, which is of independent interest. Additionally, for any Riemannian manifold of bounded sectional curvature, we provide reductions from optimization methods for smooth and g-convex functions to methods for smooth and strongly g-convex functions, and vice versa.
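For reference, the Euclidean benchmark matched here (up to log factors and the geometric penalty above) consists of the classical rates of Nesterov's accelerated gradient descent. The following is a standard statement of those rates, not taken verbatim from this paper, assuming an L-smooth objective and initial distance R to a minimizer:

\[
O\!\left(\sqrt{L R^2/\epsilon}\,\right) \text{ gradient queries in the convex case}, \qquad
O\!\left(\sqrt{L/\mu}\,\log(1/\epsilon)\right) \text{ in the } \mu\text{-strongly convex case}.
\]

The methods of this paper achieve these complexities in L, ϵ, and μ, with the additional factor polynomial in cosh(R) or 1/cos(R) accounting for the non-Euclidean geometry.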