Critical Parameters for Scalable Distributed Learning with Large Batches and Asynchronous Updates

Sebastian Stich, Amirkeivan Mohtashami, Martin Jaggi
Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, PMLR 130:4042-4050, 2021.

Abstract

It has been experimentally observed that the efficiency of distributed training with stochastic gradient descent (SGD) depends decisively on the batch size and, in asynchronous implementations, on the gradient staleness. In particular, it has been observed that the speedup saturates beyond a certain batch size and/or when the delays grow too large. We identify a data-dependent parameter that explains the speedup saturation in both of these settings. Our comprehensive theoretical analysis, for strongly convex, convex, and non-convex settings, unifies and generalizes prior lines of work that often focused on only one of these two aspects. In particular, our approach allows us to derive improved speedup results under frequently considered sparsity assumptions. Our insights give rise to theoretically grounded guidelines on how the learning rates can be adjusted in practice. We show that our results are tight and illustrate key findings in numerical experiments.
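To make the setting described above concrete, the following is a minimal, illustrative sketch (not the paper's algorithm or analysis): mini-batch SGD on a simple least-squares problem where the gradient applied at step t was computed tau steps earlier, mimicking asynchronous staleness. The batch size b, delay tau, and step size gamma below are hypothetical choices for illustration only.

```python
import numpy as np

# Illustrative sketch of delayed mini-batch SGD on a least-squares objective
# f(x) = 1/(2n) * ||A x - y||^2. Parameters b, tau, gamma are hypothetical
# and are not taken from the paper.

rng = np.random.default_rng(0)
n, d = 1000, 20
A = rng.standard_normal((n, d))
x_star = rng.standard_normal(d)
y = A @ x_star + 0.1 * rng.standard_normal(n)

def minibatch_grad(x, b):
    """Stochastic gradient of f at x from a uniformly sampled mini-batch of size b."""
    idx = rng.integers(0, n, size=b)
    return A[idx].T @ (A[idx] @ x - y[idx]) / b

def delayed_sgd(b=32, tau=4, gamma=0.01, steps=500):
    """Run SGD where the applied gradient is tau steps stale (tau = 0 is synchronous)."""
    x = np.zeros(d)
    stale = []                       # queue of gradients waiting to be applied
    for _ in range(steps):
        stale.append(minibatch_grad(x, b))
        if len(stale) > tau:         # apply the gradient computed tau steps ago
            x -= gamma * stale.pop(0)
    return 0.5 * np.mean((A @ x - y) ** 2)

for tau in (0, 4, 32):
    print(f"tau={tau:3d}  final loss = {delayed_sgd(tau=tau):.4f}")
```

Running the sketch with growing tau (or growing b at a fixed step size) gives a rough feel for the saturation phenomenon the abstract refers to: beyond a point, larger delays or batches stop helping unless the learning rate is adjusted.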

Cite this Paper


BibTeX
@InProceedings{pmlr-v130-stich21a,
  title     = {Critical Parameters for Scalable Distributed Learning with Large Batches and Asynchronous Updates},
  author    = {Stich, Sebastian and Mohtashami, Amirkeivan and Jaggi, Martin},
  booktitle = {Proceedings of The 24th International Conference on Artificial Intelligence and Statistics},
  pages     = {4042--4050},
  year      = {2021},
  editor    = {Banerjee, Arindam and Fukumizu, Kenji},
  volume    = {130},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--15 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v130/stich21a/stich21a.pdf},
  url       = {https://proceedings.mlr.press/v130/stich21a.html}
}
Endnote
%0 Conference Paper
%T Critical Parameters for Scalable Distributed Learning with Large Batches and Asynchronous Updates
%A Sebastian Stich
%A Amirkeivan Mohtashami
%A Martin Jaggi
%B Proceedings of The 24th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2021
%E Arindam Banerjee
%E Kenji Fukumizu
%F pmlr-v130-stich21a
%I PMLR
%P 4042--4050
%U https://proceedings.mlr.press/v130/stich21a.html
%V 130
APA
Stich, S., Mohtashami, A. & Jaggi, M. (2021). Critical Parameters for Scalable Distributed Learning with Large Batches and Asynchronous Updates. Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 130:4042-4050. Available from https://proceedings.mlr.press/v130/stich21a.html.