A Lower Bound for the Optimization of Finite Sums

Alekh Agarwal, Leon Bottou
Proceedings of the 32nd International Conference on Machine Learning, PMLR 37:78-86, 2015.

Abstract

This paper presents a lower bound for optimizing a finite sum of n functions, where each function is L-smooth and the sum is μ-strongly convex. We show that no algorithm can reach an error ε in minimizing all functions from this class in fewer than Ω(n + √(n(κ-1)) log(1/ε)) iterations, where κ = L/μ is a surrogate condition number. We then compare this lower bound to upper bounds for recently developed methods specializing to this setting. When the functions involved in this sum are not arbitrary, but based on i.i.d. random data, we further contrast these complexity results with those for optimal first-order methods that directly optimize the sum. The conclusion we draw is that a lot of caution is necessary for an accurate comparison, and we identify machine learning scenarios where the new methods help computationally.
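For concreteness, the setting summarized in the abstract can be restated as the finite-sum problem and iteration lower bound below. This is a minimal LaTeX sketch based only on the abstract; the symbols (f, f_i, x, d) and the 1/n normalization of the sum are notational choices made here for illustration and may differ from the paper's own notation.

```latex
\documentclass{article}
\usepackage{amsmath, amssymb}
\begin{document}

% Finite-sum setting described in the abstract (notation chosen here for
% illustration; the 1/n normalization is a common convention, not taken
% from the paper itself).
Minimize a finite sum of $n$ functions,
\[
  \min_{x \in \mathbb{R}^d} \; f(x) \;=\; \frac{1}{n} \sum_{i=1}^{n} f_i(x),
\]
where each $f_i$ is $L$-smooth and the sum $f$ is $\mu$-strongly convex.
With the surrogate condition number $\kappa = L/\mu$, the abstract states
that no algorithm can reach error $\varepsilon$ on every problem in this
class in fewer than
\[
  \Omega\!\left( n \;+\; \sqrt{n(\kappa - 1)} \, \log\frac{1}{\varepsilon} \right)
\]
iterations.

\end{document}
```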

Cite this Paper


BibTeX
@InProceedings{pmlr-v37-agarwal15,
  title     = {A Lower Bound for the Optimization of Finite Sums},
  author    = {Agarwal, Alekh and Bottou, Leon},
  booktitle = {Proceedings of the 32nd International Conference on Machine Learning},
  pages     = {78--86},
  year      = {2015},
  editor    = {Bach, Francis and Blei, David},
  volume    = {37},
  series    = {Proceedings of Machine Learning Research},
  address   = {Lille, France},
  month     = {07--09 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v37/agarwal15.pdf},
  url       = {https://proceedings.mlr.press/v37/agarwal15.html},
  abstract  = {This paper presents a lower bound for optimizing a finite sum of n functions, where each function is L-smooth and the sum is μ-strongly convex. We show that no algorithm can reach an error ε in minimizing all functions from this class in fewer than Ω(n + √(n(κ-1)) log(1/ε)) iterations, where κ = L/μ is a surrogate condition number. We then compare this lower bound to upper bounds for recently developed methods specializing to this setting. When the functions involved in this sum are not arbitrary, but based on i.i.d. random data, we further contrast these complexity results with those for optimal first-order methods that directly optimize the sum. The conclusion we draw is that a lot of caution is necessary for an accurate comparison, and we identify machine learning scenarios where the new methods help computationally.}
}
Endnote
%0 Conference Paper
%T A Lower Bound for the Optimization of Finite Sums
%A Alekh Agarwal
%A Leon Bottou
%B Proceedings of the 32nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2015
%E Francis Bach
%E David Blei
%F pmlr-v37-agarwal15
%I PMLR
%P 78--86
%U https://proceedings.mlr.press/v37/agarwal15.html
%V 37
%X This paper presents a lower bound for optimizing a finite sum of n functions, where each function is L-smooth and the sum is μ-strongly convex. We show that no algorithm can reach an error ε in minimizing all functions from this class in fewer than Ω(n + √(n(κ-1)) log(1/ε)) iterations, where κ = L/μ is a surrogate condition number. We then compare this lower bound to upper bounds for recently developed methods specializing to this setting. When the functions involved in this sum are not arbitrary, but based on i.i.d. random data, we further contrast these complexity results with those for optimal first-order methods that directly optimize the sum. The conclusion we draw is that a lot of caution is necessary for an accurate comparison, and we identify machine learning scenarios where the new methods help computationally.
RIS
TY  - CPAPER
TI  - A Lower Bound for the Optimization of Finite Sums
AU  - Alekh Agarwal
AU  - Leon Bottou
BT  - Proceedings of the 32nd International Conference on Machine Learning
DA  - 2015/06/01
ED  - Francis Bach
ED  - David Blei
ID  - pmlr-v37-agarwal15
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 37
SP  - 78
EP  - 86
L1  - http://proceedings.mlr.press/v37/agarwal15.pdf
UR  - https://proceedings.mlr.press/v37/agarwal15.html
AB  - This paper presents a lower bound for optimizing a finite sum of n functions, where each function is L-smooth and the sum is μ-strongly convex. We show that no algorithm can reach an error ε in minimizing all functions from this class in fewer than Ω(n + √(n(κ-1)) log(1/ε)) iterations, where κ = L/μ is a surrogate condition number. We then compare this lower bound to upper bounds for recently developed methods specializing to this setting. When the functions involved in this sum are not arbitrary, but based on i.i.d. random data, we further contrast these complexity results with those for optimal first-order methods that directly optimize the sum. The conclusion we draw is that a lot of caution is necessary for an accurate comparison, and we identify machine learning scenarios where the new methods help computationally.
ER  -
APA
Agarwal, A. & Bottou, L. (2015). A Lower Bound for the Optimization of Finite Sums. Proceedings of the 32nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 37:78-86. Available from https://proceedings.mlr.press/v37/agarwal15.html.

Related Material