Dissecting Adam: The Sign, Magnitude and Variance of Stochastic Gradients

Lukas Balles, Philipp Hennig
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:404-413, 2018.

Abstract

The ADAM optimizer is exceedingly popular in the deep learning community. Often it works very well, sometimes it doesn’t. Why? We interpret ADAM as a combination of two aspects: for each weight, the update direction is determined by the sign of stochastic gradients, whereas the update magnitude is determined by an estimate of their relative variance. We disentangle these two aspects and analyze them in isolation, gaining insight into the mechanisms underlying ADAM. This analysis also extends recent results on adverse effects of ADAM on generalization, isolating the sign aspect as the problematic one. Transferring the variance adaptation to SGD gives rise to a novel method, completing the practitioner’s toolbox for problems where ADAM fails.
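To make the decomposition mentioned in the abstract concrete, below is a minimal NumPy sketch (not code from the paper) of an Adam-style element-wise update direction written two equivalent ways: the standard form m / sqrt(v), and the "sign times variance adaptation" form the authors analyze. The function name, the eps handling, and the variable names m and v (moving averages of the gradients and of their squares) are illustrative choices, not the paper's notation.

```python
import numpy as np

def adam_direction(m, v, eps=1e-8):
    """Element-wise Adam-style update direction, written two equivalent ways.

    m : moving average of stochastic gradients (first-moment estimate)
    v : moving average of squared gradients (second-moment estimate)
    """
    # Standard Adam form: first moment scaled by the root of the second moment.
    standard = m / (np.sqrt(v) + eps)

    # Equivalent "sign times variance adaptation" view:
    #   m / sqrt(v) = sign(m) / sqrt(1 + eta2),  with  eta2 = (v - m**2) / m**2,
    # where eta2 estimates the relative variance of the stochastic gradient.
    # (The eps terms and the clamp make the two forms agree only approximately.)
    eta2 = np.maximum(v - m**2, 0.0) / (m**2 + eps)
    decomposed = np.sign(m) / np.sqrt(1.0 + eta2)

    return standard, decomposed
```

Bias correction and the learning rate are omitted here; the sketch only illustrates that each coordinate's direction comes from sign(m), while its magnitude is governed by the relative-variance factor 1 / sqrt(1 + eta2).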

Cite this Paper


BibTeX
@InProceedings{pmlr-v80-balles18a,
  title     = {Dissecting Adam: The Sign, Magnitude and Variance of Stochastic Gradients},
  author    = {Balles, Lukas and Hennig, Philipp},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {404--413},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/balles18a/balles18a.pdf},
  url       = {https://proceedings.mlr.press/v80/balles18a.html}
}
Endnote
%0 Conference Paper
%T Dissecting Adam: The Sign, Magnitude and Variance of Stochastic Gradients
%A Lukas Balles
%A Philipp Hennig
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-balles18a
%I PMLR
%P 404--413
%U https://proceedings.mlr.press/v80/balles18a.html
%V 80
APA
Balles, L. & Hennig, P. (2018). Dissecting Adam: The Sign, Magnitude and Variance of Stochastic Gradients. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:404-413. Available from https://proceedings.mlr.press/v80/balles18a.html.