Near-optimal method for highly smooth convex optimization
Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:492-507, 2019.
Abstract
We propose a near-optimal method for highly smooth convex optimization. More precisely, in the oracle model where one obtains the $p$th order Taylor expansion of a function at the query point, we propose a method with rate of convergence $\tilde{O}\big(1/k^{\frac{3p+1}{2}}\big)$ after $k$ queries to the oracle for any convex function whose $p$th order derivative is Lipschitz.
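For concreteness, the guarantee can be restated in standard notation as a hedged sketch; the Lipschitz constant $L_p$ and the initial distance $D$ below are assumed quantities not defined in the abstract itself. If $f$ is convex and its $p$th derivative is $L_p$-Lipschitz, i.e.
\[
  \|\nabla^p f(x) - \nabla^p f(y)\| \le L_p \|x - y\| \quad \text{for all } x, y,
\]
and $D = \|x_0 - x^*\|$ denotes the distance from the initial point to a minimizer, then after $k$ oracle queries the method is expected to return a point $x_k$ satisfying
\[
  f(x_k) - f(x^*) \le \tilde{O}\!\left( \frac{L_p D^{p+1}}{k^{\frac{3p+1}{2}}} \right),
\]
where $\tilde{O}(\cdot)$ hides polylogarithmic factors. For $p = 1$ this recovers the familiar $O(L D^2 / k^2)$ accelerated-gradient rate up to logarithmic terms.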