Memory-Efficient Pipeline-Parallel DNN Training

Deepak Narayanan, Amar Phanishayee, Kaiyu Shi, Xie Chen, Matei Zaharia
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:7937-7947, 2021.

Abstract

Many state-of-the-art ML results have been obtained by scaling up the number of parameters in existing models. However, parameters and activations for such large models often do not fit in the memory of a single accelerator device; this means that it is necessary to distribute training of large models over multiple accelerators. In this work, we propose PipeDream-2BW, a system that supports memory-efficient pipeline parallelism. PipeDream-2BW uses a novel pipelining and weight gradient coalescing strategy, combined with the double buffering of weights, to ensure high throughput, low memory footprint, and weight update semantics similar to data parallelism. In addition, PipeDream-2BW automatically partitions the model over the available hardware resources, while respecting hardware constraints such as memory capacities of accelerators and interconnect topologies. PipeDream-2BW can accelerate the training of large GPT and BERT language models by up to 20x with similar final model accuracy.
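For intuition, here is a minimal sketch (plain Python, with hypothetical names such as StageWeights and forward_version; it is not the paper's implementation) of the two mechanisms the abstract names: coalescing weight gradients across a batch's microbatches into a single buffer, and double buffering so that each pipeline stage keeps at most two weight versions rather than one per in-flight microbatch.

import copy

class StageWeights:
    """Per-stage weight buffers under a 2BW-style double-buffering scheme (illustrative only)."""

    def __init__(self, weights, microbatches_per_batch):
        self.current = weights                 # version used by newly injected microbatches
        self.shadow = copy.deepcopy(weights)   # version kept for microbatches still in flight
        self.m = microbatches_per_batch
        self.coalesced_grad = None             # one gradient buffer for the whole batch
        self.seen = 0

    def forward_version(self, injected_before_last_update):
        # A microbatch's backward pass must use the same weights as its forward
        # pass, so older in-flight microbatches read the shadow copy.
        return self.shadow if injected_before_last_update else self.current

    def accumulate(self, microbatch_grad):
        # Weight gradient coalescing: sum gradients across the batch's microbatches
        # into a single buffer instead of storing one gradient per microbatch.
        if self.coalesced_grad is None:
            self.coalesced_grad = list(microbatch_grad)
        else:
            self.coalesced_grad = [g + d for g, d in zip(self.coalesced_grad, microbatch_grad)]
        self.seen += 1

    def maybe_update(self, lr):
        # After all m microbatches, apply one update and rotate the two buffers:
        # the old "current" becomes the shadow needed by in-flight microbatches.
        if self.seen == self.m:
            new_weights = [w - lr * g for w, g in zip(self.current, self.coalesced_grad)]
            self.shadow, self.current = self.current, new_weights
            self.coalesced_grad, self.seen = None, 0


# Tiny usage example with scalar "weights":
stage = StageWeights(weights=[0.5, -1.0], microbatches_per_batch=2)
stage.accumulate([0.1, 0.2])
stage.accumulate([0.3, 0.4])
stage.maybe_update(lr=0.01)   # one update for the whole batch; two weight versions retained

In this sketch, each stage stores at most two weight versions and one coalesced gradient buffer regardless of how many microbatches are in flight, which is the kind of per-stage memory bound the abstract's "memory-efficient" claim refers to.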

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-narayanan21a,
  title     = {Memory-Efficient Pipeline-Parallel DNN Training},
  author    = {Narayanan, Deepak and Phanishayee, Amar and Shi, Kaiyu and Chen, Xie and Zaharia, Matei},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {7937--7947},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/narayanan21a/narayanan21a.pdf},
  url       = {https://proceedings.mlr.press/v139/narayanan21a.html},
  abstract  = {Many state-of-the-art ML results have been obtained by scaling up the number of parameters in existing models. However, parameters and activations for such large models often do not fit in the memory of a single accelerator device; this means that it is necessary to distribute training of large models over multiple accelerators. In this work, we propose PipeDream-2BW, a system that supports memory-efficient pipeline parallelism. PipeDream-2BW uses a novel pipelining and weight gradient coalescing strategy, combined with the double buffering of weights, to ensure high throughput, low memory footprint, and weight update semantics similar to data parallelism. In addition, PipeDream-2BW automatically partitions the model over the available hardware resources, while respecting hardware constraints such as memory capacities of accelerators and interconnect topologies. PipeDream-2BW can accelerate the training of large GPT and BERT language models by up to 20x with similar final model accuracy.}
}
Endnote
%0 Conference Paper
%T Memory-Efficient Pipeline-Parallel DNN Training
%A Deepak Narayanan
%A Amar Phanishayee
%A Kaiyu Shi
%A Xie Chen
%A Matei Zaharia
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-narayanan21a
%I PMLR
%P 7937--7947
%U https://proceedings.mlr.press/v139/narayanan21a.html
%V 139
%X Many state-of-the-art ML results have been obtained by scaling up the number of parameters in existing models. However, parameters and activations for such large models often do not fit in the memory of a single accelerator device; this means that it is necessary to distribute training of large models over multiple accelerators. In this work, we propose PipeDream-2BW, a system that supports memory-efficient pipeline parallelism. PipeDream-2BW uses a novel pipelining and weight gradient coalescing strategy, combined with the double buffering of weights, to ensure high throughput, low memory footprint, and weight update semantics similar to data parallelism. In addition, PipeDream-2BW automatically partitions the model over the available hardware resources, while respecting hardware constraints such as memory capacities of accelerators and interconnect topologies. PipeDream-2BW can accelerate the training of large GPT and BERT language models by up to 20x with similar final model accuracy.
APA
Narayanan, D., Phanishayee, A., Shi, K., Chen, X. & Zaharia, M. (2021). Memory-Efficient Pipeline-Parallel DNN Training. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:7937-7947. Available from https://proceedings.mlr.press/v139/narayanan21a.html.
