Near-optimal method for highly smooth convex optimization
Proceedings of the Thirty-Second Conference on Learning Theory, PMLR 99:492-507, 2019.
Abstract
We propose a near-optimal method for highly smooth convex optimization. More precisely, in the oracle model where one obtains the $p^{th}$ order Taylor expansion of a function at the query point, we propose a method with rate of convergence $\tilde{O}(1/k^{\frac{3p+1}{2}})$ after $k$ queries to the oracle for any convex function whose $p^{th}$ order derivative is Lipschitz.
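As a quick sanity check on the stated rate, the exponent $(3p+1)/2$ can be evaluated for small $p$; for $p=1$ it gives 2, recovering the classical $O(1/k^2)$ rate of accelerated gradient descent for functions with Lipschitz gradient. A minimal sketch of this arithmetic (illustrative only, not the paper's method; the helper name is ours):

```python
from fractions import Fraction

def rate_exponent(p: int) -> Fraction:
    """Exponent in the near-optimal rate O(1/k^((3p+1)/2)) for a p-th order oracle."""
    return Fraction(3 * p + 1, 2)

# p = 1: exponent 2, matching accelerated gradient descent's O(1/k^2).
# p = 2: exponent 7/2, the rate attainable with second-order (Hessian) oracles.
for p in (1, 2, 3):
    print(f"p = {p}: rate O(1/k^{rate_exponent(p)})")
```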