SWARM Parallelism: Training Large Models Can Be Surprisingly Communication-Efficient

Max Ryabinin, Tim Dettmers, Michael Diskin, Alexander Borzunov
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:29416-29440, 2023.

Abstract

Many deep learning applications benefit from using large models with billions of parameters. Training these models is notoriously expensive due to the need for specialized HPC clusters. In this work, we consider alternative setups for training large models: using cheap “preemptible” instances or pooling existing resources from multiple regions. We analyze the performance of existing model-parallel algorithms in these conditions and find configurations where training larger models becomes less communication-intensive. Based on these findings, we propose SWARM Parallelism (Stochastically Wired Adaptively Rebalanced Model Parallelism), a model-parallel training algorithm designed for poorly connected, heterogeneous and unreliable devices. SWARM creates temporary randomized pipelines between nodes that are rebalanced in case of failure. We empirically validate our findings and compare SWARM Parallelism with existing large-scale training approaches. Finally, we combine our insights with compression strategies to train a large Transformer language model with 1B shared parameters ($\approx$13B before sharing) on preemptible T4 GPUs with less than 200 Mb/s network.
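
To make the abstract's two core mechanisms more concrete, here is a minimal illustrative sketch of stochastic wiring (sampling the next-stage peer in proportion to its measured throughput) and adaptive rebalancing (shifting a worker from an over-provisioned stage to an under-provisioned one). The Peer, pick_next_peer, and rebalance names are hypothetical and do not reflect the authors' actual Hivemind-based implementation; this is only a toy model of the idea under those assumptions.

import random
from dataclasses import dataclass

@dataclass
class Peer:
    """A hypothetical worker serving one pipeline stage."""
    name: str
    throughput: float  # assumed to be measured elsewhere (samples/s)
    alive: bool = True

def pick_next_peer(stage_peers):
    """Stochastic wiring: sample a next-stage peer with probability
    proportional to its measured throughput, skipping failed peers."""
    alive = [p for p in stage_peers if p.alive]
    if not alive:
        raise RuntimeError("no live peers for this stage; rebalancing needed")
    weights = [p.throughput for p in alive]
    return random.choices(alive, weights=weights, k=1)[0]

def rebalance(stages):
    """Adaptive rebalancing (simplified): move one peer from the stage with
    the highest aggregate throughput to the stage with the lowest."""
    totals = [sum(p.throughput for p in s if p.alive) for s in stages]
    donor, recipient = totals.index(max(totals)), totals.index(min(totals))
    if donor != recipient and len(stages[donor]) > 1:
        stages[recipient].append(stages[donor].pop())

# Example: route one microbatch through a three-stage pipeline.
stages = [
    [Peer("a1", 5.0), Peer("a2", 2.0)],
    [Peer("b1", 3.0)],
    [Peer("c1", 4.0), Peer("c2", 1.0, alive=False)],
]
route = [pick_next_peer(stage) for stage in stages]
print("chosen route:", [p.name for p in route])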

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-ryabinin23a,
  title     = {{SWARM} Parallelism: Training Large Models Can Be Surprisingly Communication-Efficient},
  author    = {Ryabinin, Max and Dettmers, Tim and Diskin, Michael and Borzunov, Alexander},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {29416--29440},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/ryabinin23a/ryabinin23a.pdf},
  url       = {https://proceedings.mlr.press/v202/ryabinin23a.html},
  abstract  = {Many deep learning applications benefit from using large models with billions of parameters. Training these models is notoriously expensive due to the need for specialized HPC clusters. In this work, we consider alternative setups for training large models: using cheap “preemptible” instances or pooling existing resources from multiple regions. We analyze the performance of existing model-parallel algorithms in these conditions and find configurations where training larger models becomes less communication-intensive. Based on these findings, we propose SWARM Parallelism (Stochastically Wired Adaptively Rebalanced Model Parallelism), a model-parallel training algorithm designed for poorly connected, heterogeneous and unreliable devices. SWARM creates temporary randomized pipelines between nodes that are rebalanced in case of failure. We empirically validate our findings and compare SWARM Parallelism with existing large-scale training approaches. Finally, we combine our insights with compression strategies to train a large Transformer language model with 1B shared parameters ($\approx$13B before sharing) on preemptible T4 GPUs with less than 200 Mb/s network.}
}
Endnote
%0 Conference Paper
%T SWARM Parallelism: Training Large Models Can Be Surprisingly Communication-Efficient
%A Max Ryabinin
%A Tim Dettmers
%A Michael Diskin
%A Alexander Borzunov
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-ryabinin23a
%I PMLR
%P 29416--29440
%U https://proceedings.mlr.press/v202/ryabinin23a.html
%V 202
%X Many deep learning applications benefit from using large models with billions of parameters. Training these models is notoriously expensive due to the need for specialized HPC clusters. In this work, we consider alternative setups for training large models: using cheap “preemptible” instances or pooling existing resources from multiple regions. We analyze the performance of existing model-parallel algorithms in these conditions and find configurations where training larger models becomes less communication-intensive. Based on these findings, we propose SWARM Parallelism (Stochastically Wired Adaptively Rebalanced Model Parallelism), a model-parallel training algorithm designed for poorly connected, heterogeneous and unreliable devices. SWARM creates temporary randomized pipelines between nodes that are rebalanced in case of failure. We empirically validate our findings and compare SWARM Parallelism with existing large-scale training approaches. Finally, we combine our insights with compression strategies to train a large Transformer language model with 1B shared parameters ($\approx$13B before sharing) on preemptible T4 GPUs with less than 200 Mb/s network.
APA
Ryabinin, M., Dettmers, T., Diskin, M. & Borzunov, A. (2023). SWARM Parallelism: Training Large Models Can Be Surprisingly Communication-Efficient. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:29416-29440. Available from https://proceedings.mlr.press/v202/ryabinin23a.html.