AdaScale SGD: A User-Friendly Algorithm for Distributed Training

Tyler Johnson, Pulkit Agrawal, Haijie Gu, Carlos Guestrin
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:4911-4920, 2020.

Abstract

When using large-batch training to speed up stochastic gradient descent, learning rates must adapt to new batch sizes in order to maximize speed-ups and preserve model quality. Re-tuning learning rates is resource intensive, while fixed scaling rules often degrade model quality. We propose AdaScale SGD, an algorithm that reliably adapts learning rates to large-batch training. By continually adapting to the gradient’s variance, AdaScale automatically achieves speed-ups for a wide range of batch sizes. We formally describe this quality with AdaScale’s convergence bound, which maintains final objective values, even as batch sizes grow large and the number of iterations decreases. In empirical comparisons, AdaScale trains well beyond the batch size limits of popular “linear learning rate scaling” rules. This includes large-batch training with no model degradation for machine translation, image classification, object detection, and speech recognition tasks. AdaScale’s qualitative behavior is similar to that of “warm-up” heuristics, but unlike warm-up, this behavior emerges naturally from a principled mechanism. The algorithm introduces negligible computational overhead and no new hyperparameters, making AdaScale an attractive choice for large-scale training in practice.
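To make the variance-adaptation mechanism concrete, the following is a minimal NumPy sketch of one plausible AdaScale-style update: it estimates a gain ratio r_t = (σ² + μ²) / (σ²/S + μ²) from the per-worker gradients, scales the single-worker learning rate by that gain, and advances a scale-invariant iteration counter by the same amount. The function names, the raw per-step statistics, and the overall structure are illustrative assumptions, not the paper’s reference implementation (which, for instance, smooths these estimates across iterations).

```python
import numpy as np

def adascale_gain(per_worker_grads, scale):
    """Estimate the gain ratio r_t = (sigma^2 + mu^2) / (sigma^2 / S + mu^2).

    sigma^2 estimates the per-worker gradient variance, mu^2 the squared norm
    of the expected gradient, and S ("scale") the number of workers. The ratio
    lies in [1, S]. Expects a list of >= 2 flattened gradient arrays.
    """
    grads = np.stack(per_worker_grads)          # shape: (S, num_params)
    avg_grad = grads.mean(axis=0)
    sigma_sq = grads.var(axis=0, ddof=1).sum()  # trace of the sample covariance
    # E[||avg_grad||^2] = ||mu||^2 + sigma^2 / S, so debias the norm estimate.
    mu_sq = max(avg_grad @ avg_grad - sigma_sq / scale, 0.0)
    eps = 1e-6                                  # guards against division by zero
    return (sigma_sq + mu_sq + eps) / (sigma_sq / scale + mu_sq + eps)

def adascale_sgd_step(params, per_worker_grads, base_lr, scale, invariant_t):
    """One update: multiply the single-worker learning rate by the gain and
    advance the scale-invariant iteration counter by the same amount.
    Training stops once invariant_t reaches the original single-worker budget."""
    gain = adascale_gain(per_worker_grads, scale)
    avg_grad = np.stack(per_worker_grads).mean(axis=0)
    new_params = params - gain * base_lr * avg_grad
    return new_params, invariant_t + gain
```

Roughly speaking, early in training the full gradient is large relative to its variance, so the gain stays near 1; as the variance comes to dominate, the gain approaches S, producing the warm-up-like ramp mentioned in the abstract without a hand-tuned schedule.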

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-johnson20a,
  title     = {{A}da{S}cale {SGD}: A User-Friendly Algorithm for Distributed Training},
  author    = {Johnson, Tyler and Agrawal, Pulkit and Gu, Haijie and Guestrin, Carlos},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {4911--4920},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/johnson20a/johnson20a.pdf},
  url       = {https://proceedings.mlr.press/v119/johnson20a.html}
}
Endnote
%0 Conference Paper
%T AdaScale SGD: A User-Friendly Algorithm for Distributed Training
%A Tyler Johnson
%A Pulkit Agrawal
%A Haijie Gu
%A Carlos Guestrin
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-johnson20a
%I PMLR
%P 4911--4920
%U https://proceedings.mlr.press/v119/johnson20a.html
%V 119
APA
Johnson, T., Agrawal, P., Gu, H. & Guestrin, C. (2020). AdaScale SGD: A User-Friendly Algorithm for Distributed Training. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:4911-4920. Available from https://proceedings.mlr.press/v119/johnson20a.html.
