A Lower Bound for the Optimization of Finite Sums
Proceedings of the 32nd International Conference on Machine Learning, PMLR 37:78-86, 2015.
Abstract
This paper presents a lower bound for optimizing a finite sum of n functions, where each function is L-smooth and the sum is μ-strongly convex. We show that no algorithm can reach an error ε in minimizing all functions from this class in fewer than Ω(n + √(n(κ−1)) log(1/ε)) iterations, where κ = L/μ is a surrogate condition number. We then compare this lower bound to upper bounds for recently developed methods specializing to this setting. When the functions involved in this sum are not arbitrary, but based on i.i.d. random data, we further contrast these complexity results with those for optimal first-order methods that directly optimize the sum. The conclusion we draw is that a lot of caution is necessary for an accurate comparison, and we identify machine learning scenarios where the new methods help computationally.
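To get a feel for the scales involved, the sketch below (not from the paper) evaluates the lower bound numerically and sets it against the classical accelerated full-gradient complexity O(n √κ log(1/ε)), where every iteration touches all n functions. Constant factors are dropped on both sides, the full-gradient rate is the standard Nesterov bound rather than anything stated in this abstract, and the function names and sample values of n, κ, and ε are hypothetical choices for illustration; comparing such asymptotic quantities directly is exactly the kind of step the abstract cautions about.

import math

def finite_sum_lower_bound(n, kappa, eps):
    # Lower bound Omega(n + sqrt(n*(kappa-1)) * log(1/eps)),
    # with constant factors omitted (illustrative only).
    return n + math.sqrt(n * (kappa - 1)) * math.log(1.0 / eps)

def full_gradient_cost(n, kappa, eps):
    # Classical accelerated full-gradient complexity
    # O(sqrt(kappa) * log(1/eps)) iterations, each costing n
    # function/gradient evaluations (assumed Nesterov rate,
    # not taken from this abstract; constants omitted).
    return n * math.sqrt(kappa) * math.log(1.0 / eps)

# Hypothetical problem sizes for illustration.
n, kappa, eps = 10**6, 10**4, 1e-6
print(f"finite-sum lower bound:  {finite_sum_lower_bound(n, kappa, eps):.3e}")
print(f"accelerated full grad.:  {full_gradient_cost(n, kappa, eps):.3e}")

For these values the finite-sum lower bound is several orders of magnitude below the full-gradient cost, which is the regime where the specialized incremental methods discussed in the paper can pay off computationally.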