Acceleration via Fractal Learning Rate Schedules

Naman Agarwal, Surbhi Goel, Cyril Zhang
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:87-99, 2021.

Abstract

In practical applications of iterative first-order optimization, the learning rate schedule remains notoriously difficult to understand and expensive to tune. We demonstrate the presence of these subtleties even in the innocuous case when the objective is a convex quadratic. We reinterpret an iterative algorithm from the numerical analysis literature as what we call the Chebyshev learning rate schedule for accelerating vanilla gradient descent, and show that the problem of mitigating instability leads to a fractal ordering of step sizes. We provide some experiments to challenge conventional beliefs about stable learning rates in deep learning: the fractal schedule enables training to converge with locally unstable updates which make negative progress on the objective.
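To make the idea concrete, here is a minimal sketch in Python/NumPy of the kind of schedule the abstract describes: step sizes taken as reciprocals of Chebyshev nodes on the eigenvalue range [mu, L], applied in a bit-reversal-style interleaved order as a stand-in for the stable fractal ordering analyzed in the paper. The function names, the bit-reversal permutation, and the constants (spectrum [0.01, 1], T = 64 steps) are illustrative assumptions, not the paper's exact construction.

import numpy as np

# Sketch (not the paper's exact construction): Chebyshev step sizes for
# gradient descent on a convex quadratic with spectrum in [mu, L], applied
# in a bit-reversal-style order as a stand-in for the fractal ordering.

def chebyshev_step_sizes(mu, L, T):
    """Reciprocals of the T Chebyshev nodes mapped to [mu, L]."""
    k = np.arange(1, T + 1)
    nodes = (L + mu) / 2 - (L - mu) / 2 * np.cos((2 * k - 1) * np.pi / (2 * T))
    return 1.0 / nodes

def bit_reversal_order(T):
    """Bit-reversal permutation of 0..T-1 (assumes T is a power of two)."""
    bits = int(np.log2(T))
    return np.array([int(format(i, f'0{bits}b')[::-1], 2) for i in range(T)])

def gd_with_schedule(A, b, x0, etas):
    """Gradient descent on f(x) = 0.5 x^T A x - b^T x with the given step sizes."""
    x = x0.copy()
    for eta in etas:
        x = x - eta * (A @ x - b)   # individual steps may be locally unstable
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, T = 32, 64
    eigs = np.linspace(0.01, 1.0, d)                  # spectrum in [mu, L] = [0.01, 1]
    Q, _ = np.linalg.qr(rng.standard_normal((d, d)))
    A = Q @ np.diag(eigs) @ Q.T                       # random quadratic with known spectrum
    b = rng.standard_normal(d)
    x_star = np.linalg.solve(A, b)

    etas = chebyshev_step_sizes(0.01, 1.0, T)
    perm = bit_reversal_order(T)
    x = gd_with_schedule(A, b, np.zeros(d), etas[perm])
    print("distance to optimum:", np.linalg.norm(x - x_star))

Note that many of the individual step sizes exceed the classical stability threshold 2/L; in exact arithmetic any ordering of the Chebyshev steps yields the same final iterate, and the interleaved ordering is what keeps the intermediate iterates numerically controlled.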

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-agarwal21a,
  title     = {Acceleration via Fractal Learning Rate Schedules},
  author    = {Agarwal, Naman and Goel, Surbhi and Zhang, Cyril},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {87--99},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/agarwal21a/agarwal21a.pdf},
  url       = {https://proceedings.mlr.press/v139/agarwal21a.html},
  abstract  = {In practical applications of iterative first-order optimization, the learning rate schedule remains notoriously difficult to understand and expensive to tune. We demonstrate the presence of these subtleties even in the innocuous case when the objective is a convex quadratic. We reinterpret an iterative algorithm from the numerical analysis literature as what we call the Chebyshev learning rate schedule for accelerating vanilla gradient descent, and show that the problem of mitigating instability leads to a fractal ordering of step sizes. We provide some experiments to challenge conventional beliefs about stable learning rates in deep learning: the fractal schedule enables training to converge with locally unstable updates which make negative progress on the objective.}
}
Endnote
%0 Conference Paper
%T Acceleration via Fractal Learning Rate Schedules
%A Naman Agarwal
%A Surbhi Goel
%A Cyril Zhang
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-agarwal21a
%I PMLR
%P 87--99
%U https://proceedings.mlr.press/v139/agarwal21a.html
%V 139
%X In practical applications of iterative first-order optimization, the learning rate schedule remains notoriously difficult to understand and expensive to tune. We demonstrate the presence of these subtleties even in the innocuous case when the objective is a convex quadratic. We reinterpret an iterative algorithm from the numerical analysis literature as what we call the Chebyshev learning rate schedule for accelerating vanilla gradient descent, and show that the problem of mitigating instability leads to a fractal ordering of step sizes. We provide some experiments to challenge conventional beliefs about stable learning rates in deep learning: the fractal schedule enables training to converge with locally unstable updates which make negative progress on the objective.
APA
Agarwal, N., Goel, S. & Zhang, C. (2021). Acceleration via Fractal Learning Rate Schedules. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:87-99. Available from https://proceedings.mlr.press/v139/agarwal21a.html.