Practical Tradeoffs between Memory, Compute, and Performance in Learned Optimizers

Luke Metz, C. Daniel Freeman, James Harrison, Niru Maheswaranathan, Jascha Sohl-Dickstein
Proceedings of The 1st Conference on Lifelong Learning Agents, PMLR 199:142-164, 2022.

Abstract

Optimization plays a costly and crucial role in developing machine learning systems. In learned optimizers, the few hyperparameters of commonly used hand-designed optimizers, e.g. Adam or SGD, are replaced with flexible parametric functions. The parameters of these functions are then optimized so that the resulting learned optimizer minimizes a target loss on a chosen class of models. Learned optimizers can both reduce the number of required training steps and improve the final test loss. However, they can be expensive to train, and once trained can be expensive to use due to computational and memory overhead for the optimizer itself. In this work, we identify and quantify the design features governing the memory, compute, and performance trade-offs for many learned and hand-designed optimizers. We further leverage our analysis to construct a learned optimizer that is both faster and more memory efficient than previous work. Our model and training code are open source.
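To make the idea in the abstract concrete, below is a minimal, hypothetical sketch in JAX of a learned optimizer: each parameter's update is produced by a small parametric function (here a tiny MLP over per-parameter features such as the gradient, momentum, and parameter value), and the MLP's weights are meta-trained by differentiating through a short unrolled inner optimization on a toy task. All names (init_meta_params, learned_update, meta_loss) and the least-squares task are illustrative assumptions, not the paper's architecture; the authors' open-source model and training code are the authoritative implementation.

# Hypothetical sketch of the learned-optimizer idea, not the paper's method.
import jax
import jax.numpy as jnp

def init_meta_params(key, n_features=3, hidden=8):
    """Initialize the tiny MLP that maps per-parameter features to an update."""
    k1, k2 = jax.random.split(key)
    return {
        "w1": 0.1 * jax.random.normal(k1, (n_features, hidden)),
        "b1": jnp.zeros(hidden),
        "w2": 0.1 * jax.random.normal(k2, (hidden, 1)),
        "b2": jnp.zeros(1),
    }

def learned_update(theta, grad, momentum, param):
    """Apply the learned per-parameter update rule element-wise."""
    # Simple per-parameter features: gradient, momentum, current value.
    feats = jnp.stack([grad, momentum, param], axis=-1)      # (..., 3)
    h = jnp.tanh(feats @ theta["w1"] + theta["b1"])          # (..., hidden)
    step = (h @ theta["w2"] + theta["b2"])[..., 0]           # (...,)
    return 0.01 * step  # small output scale keeps early meta-training stable

def inner_loss(w, x, y):
    """Toy least-squares task that the learned optimizer is trained to optimize."""
    return jnp.mean((x @ w - y) ** 2)

def meta_loss(theta, key, inner_steps=10):
    """Unroll the learned optimizer on a fresh task; return the final task loss."""
    kx, kw = jax.random.split(key)
    x = jax.random.normal(kx, (32, 4))
    y = x @ jnp.array([1.0, -2.0, 0.5, 3.0])
    w = jax.random.normal(kw, (4,))
    m = jnp.zeros_like(w)
    for _ in range(inner_steps):
        g = jax.grad(inner_loss)(w, x, y)
        m = 0.9 * m + g
        w = w - learned_update(theta, g, m, w)
    return inner_loss(w, x, y)

# Meta-train theta by gradient descent through the unrolled inner optimization.
key = jax.random.PRNGKey(0)
theta = init_meta_params(key)
for step in range(200):
    key, sub = jax.random.split(key)
    loss, grads = jax.value_and_grad(meta_loss)(theta, sub)
    theta = jax.tree_util.tree_map(lambda t, g: t - 1e-2 * g, theta, grads)

The small output scale and short unroll are common stabilizing choices when differentiating through an unrolled optimizer; the paper itself studies how richer per-parameter features and optimizer architectures trade memory and compute against optimization performance.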

Cite this Paper


BibTeX
@InProceedings{pmlr-v199-metz22a,
  title     = {Practical Tradeoffs between Memory, Compute, and Performance in Learned Optimizers},
  author    = {Metz, Luke and Freeman, C. Daniel and Harrison, James and Maheswaranathan, Niru and Sohl-Dickstein, Jascha},
  booktitle = {Proceedings of The 1st Conference on Lifelong Learning Agents},
  pages     = {142--164},
  year      = {2022},
  editor    = {Chandar, Sarath and Pascanu, Razvan and Precup, Doina},
  volume    = {199},
  series    = {Proceedings of Machine Learning Research},
  month     = {22--24 Aug},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v199/metz22a/metz22a.pdf},
  url       = {https://proceedings.mlr.press/v199/metz22a.html},
  abstract  = {Optimization plays a costly and crucial role in developing machine learning systems. In learned optimizers, the few hyperparameters of commonly used hand-designed optimizers, e.g. Adam or SGD, are replaced with flexible parametric functions. The parameters of these functions are then optimized so that the resulting learned optimizer minimizes a target loss on a chosen class of models. Learned optimizers can both reduce the number of required training steps and improve the final test loss. However, they can be expensive to train, and once trained can be expensive to use due to computational and memory overhead for the optimizer itself. In this work, we identify and quantify the design features governing the memory, compute, and performance trade-offs for many learned and hand-designed optimizers. We further leverage our analysis to construct a learned optimizer that is both faster and more memory efficient than previous work. Our model and training code are open source.}
}
Endnote
%0 Conference Paper
%T Practical Tradeoffs between Memory, Compute, and Performance in Learned Optimizers
%A Luke Metz
%A C. Daniel Freeman
%A James Harrison
%A Niru Maheswaranathan
%A Jascha Sohl-Dickstein
%B Proceedings of The 1st Conference on Lifelong Learning Agents
%C Proceedings of Machine Learning Research
%D 2022
%E Sarath Chandar
%E Razvan Pascanu
%E Doina Precup
%F pmlr-v199-metz22a
%I PMLR
%P 142--164
%U https://proceedings.mlr.press/v199/metz22a.html
%V 199
%X Optimization plays a costly and crucial role in developing machine learning systems. In learned optimizers, the few hyperparameters of commonly used hand-designed optimizers, e.g. Adam or SGD, are replaced with flexible parametric functions. The parameters of these functions are then optimized so that the resulting learned optimizer minimizes a target loss on a chosen class of models. Learned optimizers can both reduce the number of required training steps and improve the final test loss. However, they can be expensive to train, and once trained can be expensive to use due to computational and memory overhead for the optimizer itself. In this work, we identify and quantify the design features governing the memory, compute, and performance trade-offs for many learned and hand-designed optimizers. We further leverage our analysis to construct a learned optimizer that is both faster and more memory efficient than previous work. Our model and training code are open source.
APA
Metz, L., Freeman, C.D., Harrison, J., Maheswaranathan, N. & Sohl-Dickstein, J. (2022). Practical Tradeoffs between Memory, Compute, and Performance in Learned Optimizers. Proceedings of The 1st Conference on Lifelong Learning Agents, in Proceedings of Machine Learning Research 199:142-164. Available from https://proceedings.mlr.press/v199/metz22a.html.