Beyond the regret minimization barrier: an optimal algorithm for stochastic strongly-convex optimization

Elad Hazan, Satyen Kale;
Proceedings of the 24th Annual Conference on Learning Theory, PMLR 19:421-436, 2011.

Abstract

We give a novel algorithm for stochastic strongly-convex optimization in the gradient oracle model which returns an O(1/T)-approximate solution after T gradient updates. This rate of convergence is optimal in the gradient oracle model. This improves upon the previously known best rate of O(log(T)/T), which was obtained by applying an online strongly-convex optimization algorithm with regret O(log(T)) to the batch setting. We complement this result by proving that any algorithm has expected regret of Ω(log(T)) in the online stochastic strongly-convex optimization setting. This lower bound holds even in the full-information setting, which reveals more information to the algorithm than just gradients. This shows that any online-to-batch conversion is inherently suboptimal for stochastic strongly-convex optimization. This is the first formal evidence that online convex optimization is strictly more difficult than batch stochastic convex optimization.
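
The abstract only states the O(1/T) rate; the paper's optimal method is, roughly, an epoch-based stochastic gradient descent. The sketch below is only an illustration of that idea: the epoch lengths, step sizes, projection radius, and the toy objective are assumptions made here for concreteness, not the paper's exact specification.

    # Illustrative sketch of an epoch-based SGD scheme of the kind the paper
    # analyzes: run SGD in epochs, average the iterates within each epoch,
    # then restart from that average with a smaller step size and a longer
    # epoch. All constants below are assumptions for illustration.
    import numpy as np

    def epoch_gd(grad_oracle, x0, T, lam=1.0, radius=1.0):
        """Approximately minimize a lam-strongly-convex objective over the
        Euclidean ball of the given radius using T stochastic gradient calls.
        grad_oracle(x) returns an unbiased stochastic gradient at x."""
        x = np.array(x0, dtype=float)
        epoch_len = 4          # assumed initial epoch length
        eta = 1.0 / lam        # assumed initial step size
        used = 0
        while used + epoch_len <= T:
            avg = np.zeros_like(x)
            for _ in range(epoch_len):
                g = grad_oracle(x)
                x = x - eta * g
                # project back onto the feasible ball
                norm = np.linalg.norm(x)
                if norm > radius:
                    x = x * (radius / norm)
                avg += x
            used += epoch_len
            x = avg / epoch_len   # restart from the epoch average
            epoch_len *= 2        # lengthen the next epoch
            eta /= 2.0            # shrink the step size
        return x

    # Toy usage: minimize E[(x - 0.7)^2 / 2] from noisy gradients.
    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        oracle = lambda x: (x - 0.7) + 0.1 * rng.standard_normal()
        x_hat = epoch_gd(oracle, x0=0.0, T=10000)
        print(x_hat)  # should be close to 0.7

Plain SGD with step size proportional to 1/t and a single final average gives only the O(log(T)/T) rate mentioned above; the restart-and-average structure is what removes the log(T) factor.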

Related Material