Open Problem: Polynomial linearly-convergent method for g-convex optimization?
Proceedings of Thirty Sixth Conference on Learning Theory, PMLR 195:5950-5956, 2023.
Abstract
Let f: M → R be a Lipschitz and geodesically convex function defined on a d-dimensional Riemannian manifold M. Does there exist a first-order deterministic algorithm which (a) uses at most O(poly(d) log(ϵ^{-1})) subgradient queries to find a point with target accuracy ϵ, and (b) requires only O(poly(d)) arithmetic operations per query? In convex optimization, the classical ellipsoid method achieves this. After detailing related work, we provide an ellipsoid-like algorithm with query complexity O(d^2 log^2(ϵ^{-1})) and per-query complexity O(d^2) for the limited case where M has constant curvature (hemisphere or hyperbolic space). We then detail possible approaches and corresponding obstacles for designing an ellipsoid-like method for general Riemannian manifolds.
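For reference, the Euclidean benchmark the abstract alludes to is the classical central-cut ellipsoid method, which attains O(d^2 log(ϵ^{-1})) subgradient queries with O(d^2) arithmetic per query. Below is a minimal sketch of that classical Euclidean method (not the Riemannian variant proposed in the paper); the function names, tolerance, and the ℓ1 test problem are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def ellipsoid_method(f, subgrad, d, R=1.0, iters=500):
    """Central-cut ellipsoid method (Euclidean sketch) for minimizing a
    Lipschitz convex f whose minimizer lies in the ball of radius R."""
    c = np.zeros(d)               # ellipsoid center
    P = (R ** 2) * np.eye(d)      # shape matrix: E = {x : (x - c)^T P^{-1} (x - c) <= 1}
    best_x, best_val = c.copy(), f(c)
    for _ in range(iters):
        g = subgrad(c)            # one subgradient query per iteration
        gPg = g @ P @ g
        if gPg <= 1e-16:          # numerically zero subgradient: center is optimal
            break
        ghat = (P @ g) / np.sqrt(gPg)
        # Minimum-volume ellipsoid containing the half-ellipsoid cut by g
        c = c - ghat / (d + 1)
        P = (d ** 2 / (d ** 2 - 1.0)) * (P - (2.0 / (d + 1)) * np.outer(ghat, ghat))
        val = f(c)
        if val < best_val:
            best_x, best_val = c.copy(), val
    return best_x, best_val

# Illustrative usage: minimize f(x) = ||x - x_star||_1 (Lipschitz, convex).
d = 5
x_star = 0.3 * np.ones(d)
f = lambda x: np.abs(x - x_star).sum()
subgrad = lambda x: np.sign(x - x_star)
x, val = ellipsoid_method(f, subgrad, d, R=2.0, iters=2000)
```

Each update costs O(d^2) arithmetic and shrinks the ellipsoid volume by a dimension-dependent factor, which yields the poly(d) log(ϵ^{-1}) query complexity; the open problem asks whether an analogous guarantee is achievable on general Riemannian manifolds.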