DoG is SGD’s Best Friend: A Parameter-Free Dynamic Step Size Schedule

Maor Ivgi, Oliver Hinder, Yair Carmon
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:14465-14499, 2023.

Abstract

We propose a tuning-free dynamic SGD step size formula, which we call Distance over Gradients (DoG). The DoG step sizes depend on simple empirical quantities (distance from the initial point and norms of gradients) and have no “learning rate” parameter. Theoretically, we show that, for stochastic convex optimization, a slight variation of the DoG formula enjoys strong, high-probability parameter-free convergence guarantees and iterate movement bounds. Empirically, we consider a broad range of vision and language transfer learning tasks, and show that DoG’s performance is close to that of SGD with tuned learning rate. We also propose a per-layer variant of DoG that generally outperforms tuned SGD, approaching the performance of tuned Adam. A PyTorch implementation of our algorithms is available at https://github.com/formll/dog.
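To make the abstract's description concrete, below is a minimal, illustrative sketch of a "distance over gradients" step size driving SGD: the step size at each iteration is the maximum distance traveled from the initial point divided by the square root of the accumulated squared gradient norms. This is a reconstruction from the abstract's description only; the initial-distance constant r_eps, the exact update rule, and the per-layer variant here are assumptions, and the authors' actual algorithm and defaults are given in the paper and at https://github.com/formll/dog.

# Minimal DoG-style SGD sketch (illustrative, not the authors' implementation).
import torch

def dog_sgd(loss_fn, x0, steps=1000, r_eps=1e-4):
    x = x0.clone().requires_grad_(True)
    max_dist = r_eps          # max distance from x0 seen so far (seeded with a small r_eps)
    grad_sq_sum = 0.0         # running sum of squared gradient norms
    for _ in range(steps):
        loss = loss_fn(x)
        (grad,) = torch.autograd.grad(loss, x)
        grad_sq_sum += grad.norm().item() ** 2
        eta = max_dist / (grad_sq_sum ** 0.5 + 1e-12)   # "distance over gradients" step size
        with torch.no_grad():
            x -= eta * grad
            max_dist = max(max_dist, (x - x0).norm().item())
    return x.detach()

# Toy usage: minimize a simple quadratic from the origin.
x_star = dog_sgd(lambda x: ((x - 3.0) ** 2).sum(), torch.zeros(2))

Note that the quantities involved (distance from the initial point, gradient norms) are all observed during training, which is why no "learning rate" parameter appears.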

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-ivgi23a,
  title     = {{D}o{G} is {SGD}’s Best Friend: A Parameter-Free Dynamic Step Size Schedule},
  author    = {Ivgi, Maor and Hinder, Oliver and Carmon, Yair},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {14465--14499},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/ivgi23a/ivgi23a.pdf},
  url       = {https://proceedings.mlr.press/v202/ivgi23a.html},
  abstract  = {We propose a tuning-free dynamic SGD step size formula, which we call Distance over Gradients (DoG). The DoG step sizes depend on simple empirical quantities (distance from the initial point and norms of gradients) and have no “learning rate” parameter. Theoretically, we show that, for stochastic convex optimization, a slight variation of the DoG formula enjoys strong, high-probability parameter-free convergence guarantees and iterate movement bounds. Empirically, we consider a broad range of vision and language transfer learning tasks, and show that DoG’s performance is close to that of SGD with tuned learning rate. We also propose a per-layer variant of DoG that generally outperforms tuned SGD, approaching the performance of tuned Adam. A PyTorch implementation of our algorithms is available at https://github.com/formll/dog.}
}
Endnote
%0 Conference Paper
%T DoG is SGD’s Best Friend: A Parameter-Free Dynamic Step Size Schedule
%A Maor Ivgi
%A Oliver Hinder
%A Yair Carmon
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-ivgi23a
%I PMLR
%P 14465--14499
%U https://proceedings.mlr.press/v202/ivgi23a.html
%V 202
%X We propose a tuning-free dynamic SGD step size formula, which we call Distance over Gradients (DoG). The DoG step sizes depend on simple empirical quantities (distance from the initial point and norms of gradients) and have no “learning rate” parameter. Theoretically, we show that, for stochastic convex optimization, a slight variation of the DoG formula enjoys strong, high-probability parameter-free convergence guarantees and iterate movement bounds. Empirically, we consider a broad range of vision and language transfer learning tasks, and show that DoG’s performance is close to that of SGD with tuned learning rate. We also propose a per-layer variant of DoG that generally outperforms tuned SGD, approaching the performance of tuned Adam. A PyTorch implementation of our algorithms is available at https://github.com/formll/dog.
APA
Ivgi, M., Hinder, O., & Carmon, Y. (2023). DoG is SGD’s Best Friend: A Parameter-Free Dynamic Step Size Schedule. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:14465-14499. Available from https://proceedings.mlr.press/v202/ivgi23a.html.