Accelerated Parameter-Free Stochastic Optimization
Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:3257-3324, 2024.
Abstract
We propose a method that achieves near-optimal rates for \emph{smooth} stochastic convex optimization and requires essentially no prior knowledge of problem parameters. This improves on prior work, which requires knowing at least the initial distance to optimality $d_0$. Our method, \textsc{U-DoG}, combines \textsc{UniXGrad} (Kavis et al., 2019) and \textsc{DoG} (Ivgi et al., 2023) with novel iterate stabilization techniques. It requires only loose bounds on $d_0$ and the noise magnitude, provides high-probability guarantees under sub-Gaussian noise, and is also near-optimal in the non-smooth case. Our experiments show consistent, strong performance on convex problems and mixed results on neural network training.
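To make the parameter-free ingredient concrete, below is a minimal sketch of the plain \textsc{DoG} step-size rule from Ivgi et al. (2023) that the abstract refers to: the step size at iteration $t$ is $\bar r_t / \sqrt{\sum_{i \le t} \|g_i\|^2}$, where $\bar r_t$ is the maximum distance travelled from the initial point (seeded with a small $r_\epsilon$), so no learning rate or estimate of $d_0$ is supplied by the user. This is \emph{not} \textsc{U-DoG} itself, which additionally incorporates \textsc{UniXGrad}-style acceleration and the paper's iterate stabilization; the function name `dog_sgd`, the argument `grad_fn`, and the default `r_eps` are illustrative choices, not from the paper.

```python
import numpy as np

def dog_sgd(grad_fn, x0, steps, r_eps=1e-6):
    """Sketch of plain DoG (Ivgi et al., 2023), a parameter-free SGD variant.

    grad_fn(x) is assumed to return a stochastic (sub)gradient at x.
    The only tuning input is r_eps, a loose lower bound on the movement scale.
    """
    x = np.asarray(x0, dtype=float)
    r_bar = r_eps        # \bar r_t: max distance from x0 seen so far
    grad_sq_sum = 0.0    # G_t: running sum of squared gradient norms
    for _ in range(steps):
        g = np.asarray(grad_fn(x), dtype=float)
        grad_sq_sum += float(np.dot(g, g))
        eta = r_bar / np.sqrt(grad_sq_sum)   # eta_t = \bar r_t / sqrt(G_t)
        x = x - eta * g
        r_bar = max(r_bar, float(np.linalg.norm(x - x0)))
    return x

# Example usage on a toy quadratic with additive gradient noise.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    grad_fn = lambda x: 2.0 * x + 0.1 * rng.standard_normal(x.shape)
    x_final = dog_sgd(grad_fn, x0=np.ones(10), steps=1000)
    print(np.linalg.norm(x_final))
```

Note that the only problem-dependent input here is the loose seed `r_eps`, mirroring the abstract's claim that only loose bounds on $d_0$ (and, for the full method, on the noise magnitude) are needed.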