LocoProp: Enhancing BackProp via Local Loss Optimization

Ehsan Amid, Rohan Anil, Manfred Warmuth
Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:9626-9642, 2022.

Abstract

Second-order methods have shown state-of-the-art performance for optimizing deep neural networks. Nonetheless, their large memory requirement and high computational complexity, compared to first-order methods, hinder their versatility in a typical low-budget setup. This paper introduces a general framework of layerwise loss construction for multilayer neural networks that achieves a performance closer to second-order methods while using only first-order optimizers. Our methodology rests on a three-component combination of a loss, a target, and a regularizer, where altering each component yields a new update rule. We provide examples using squared loss and layerwise Bregman divergences induced by the convex integral functions of various transfer functions. Our experiments on benchmark models and datasets validate the efficacy of our new approach, reducing the gap between first-order and second-order optimizers.
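To make the abstract's description concrete, below is a minimal, hypothetical sketch (plain NumPy, not the authors' code) of the squared-loss flavor of such a layerwise update: one ordinary backprop pass forms a pre-activation target for each layer, and each layer then fits that target with its own first-order optimizer while its inputs are held fixed. All names and hyperparameters (gamma, eta, inner_steps, the tanh toy network) are ours; the paper's explicit regularizer component is only imitated here by limiting the number of inner steps.

# Minimal, hypothetical sketch of a layerwise local-loss update in the
# spirit of the abstract (squared-loss variant); not the authors' code.
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network: x -> tanh(W1 x) -> W2 h, trained on a squared loss.
W1 = rng.normal(scale=0.1, size=(32, 16))
W2 = rng.normal(scale=0.1, size=(4, 32))
x = rng.normal(size=(16, 8))   # batch of 8 inputs
y = rng.normal(size=(4, 8))    # batch of 8 regression targets

gamma = 0.1     # step size used to form the local targets (assumed)
eta = 0.05      # step size of the inner first-order optimizer (assumed)
inner_steps = 5 # few inner steps stand in for an explicit proximity regularizer

# Forward pass.
z1 = W1 @ x          # pre-activations of layer 1
h1 = np.tanh(z1)     # post-activations of layer 1
z2 = W2 @ h1         # network output (linear last layer)

# One ordinary backprop pass to get gradients w.r.t. pre-activations.
dL_dz2 = z2 - y                       # grad of 0.5 * ||z2 - y||^2
dL_dh1 = W2.T @ dL_dz2
dL_dz1 = dL_dh1 * (1.0 - h1 ** 2)     # tanh'(z1) = 1 - tanh(z1)^2

# Local targets: a small gradient step on each layer's pre-activations.
t2 = z2 - gamma * dL_dz2
t1 = z1 - gamma * dL_dz1

# Each layer minimizes its own squared loss toward its target, with its
# inputs held fixed, using plain gradient descent as the local optimizer.
for _ in range(inner_steps):
    r1 = (W1 @ x) - t1                        # layer-1 residual
    W1 -= eta * (r1 @ x.T) / x.shape[1]
    r2 = (W2 @ h1) - t2                       # layer-2 residual
    W2 -= eta * (r2 @ h1.T) / h1.shape[1]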

Cite this Paper


BibTeX
@InProceedings{pmlr-v151-amid22a,
  title     = {LocoProp: Enhancing BackProp via Local Loss Optimization},
  author    = {Amid, Ehsan and Anil, Rohan and Warmuth, Manfred},
  booktitle = {Proceedings of The 25th International Conference on Artificial Intelligence and Statistics},
  pages     = {9626--9642},
  year      = {2022},
  editor    = {Camps-Valls, Gustau and Ruiz, Francisco J. R. and Valera, Isabel},
  volume    = {151},
  series    = {Proceedings of Machine Learning Research},
  month     = {28--30 Mar},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v151/amid22a/amid22a.pdf},
  url       = {https://proceedings.mlr.press/v151/amid22a.html}
}
Endnote
%0 Conference Paper
%T LocoProp: Enhancing BackProp via Local Loss Optimization
%A Ehsan Amid
%A Rohan Anil
%A Manfred Warmuth
%B Proceedings of The 25th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2022
%E Gustau Camps-Valls
%E Francisco J. R. Ruiz
%E Isabel Valera
%F pmlr-v151-amid22a
%I PMLR
%P 9626--9642
%U https://proceedings.mlr.press/v151/amid22a.html
%V 151
APA
Amid, E., Anil, R. & Warmuth, M. (2022). LocoProp: Enhancing BackProp via Local Loss Optimization. Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 151:9626-9642. Available from https://proceedings.mlr.press/v151/amid22a.html.
