Accumulated Decoupled Learning with Gradient Staleness Mitigation for Convolutional Neural Networks

Huiping Zhuang, Zhenyu Weng, Fulin Luo, Toh Kar-Ann, Haizhou Li, Zhiping Lin
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:12935-12944, 2021.

Abstract

Gradient staleness is a major side effect of decoupled learning when convolutional neural networks are trained asynchronously. Existing methods that ignore this effect can suffer reduced generalization and even divergence. In this paper, we propose accumulated decoupled learning (ADL), which incorporates module-wise gradient accumulation to mitigate gradient staleness. Unlike prior works that ignore gradient staleness, we quantify the staleness so that its mitigation can be visualized quantitatively. As a new learning scheme, ADL is theoretically shown to converge to critical points despite its asynchronism. Extensive experiments on the CIFAR-10 and ImageNet datasets demonstrate that ADL achieves promising generalization, whereas state-of-the-art methods suffer reduced generalization or even diverge. In addition, ADL attains the fastest training speed among the compared methods.
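
To make the idea concrete, below is a minimal, self-contained PyTorch sketch of module-wise gradient accumulation under simulated gradient staleness. It is an illustration only, not the authors' ADL implementation: the two-module split, the fixed staleness delay, and names such as module_a, module_b, accum_steps and stale_queue are assumptions introduced here for clarity.

# Illustrative sketch only (not the authors' code): a tiny CNN is split into
# two modules; the gradient flowing back to the earlier module is delayed by a
# fixed number of iterations to emulate asynchronous (stale) communication,
# and gradients are accumulated over several micro-batches before each update.
import torch
import torch.nn as nn
from collections import deque

torch.manual_seed(0)

module_a = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.Flatten())
module_b = nn.Sequential(nn.Linear(8 * 32 * 32, 10))
opt_a = torch.optim.SGD(module_a.parameters(), lr=0.01)
opt_b = torch.optim.SGD(module_b.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()

accum_steps = 4        # micro-batches accumulated per parameter update (assumed value)
staleness = 2          # simulated delay, in iterations, of the boundary gradient
stale_queue = deque()  # holds delayed activation gradients destined for module_a

for step in range(20):
    x = torch.randn(16, 3, 32, 32)                # synthetic data for the sketch
    y = torch.randint(0, 10, (16,))

    h = module_a(x)                                # forward through the first module
    h_detached = h.detach().requires_grad_(True)   # cut the graph at the module boundary

    # Module B backpropagates its loss; dividing by accum_steps averages the
    # gradients accumulated over one update window.
    loss = criterion(module_b(h_detached), y) / accum_steps
    loss.backward()

    # The boundary gradient reaches module A only after `staleness` iterations,
    # emulating the stale gradients of asynchronous, decoupled training.
    stale_queue.append((h, h_detached.grad.clone()))
    if len(stale_queue) > staleness:
        old_h, old_grad = stale_queue.popleft()
        old_h.backward(old_grad)                   # accumulate the stale gradient in module A

    # Apply the accumulated gradients only once per accumulation window, so each
    # update averages several micro-batches instead of reacting to a single
    # stale gradient.
    if (step + 1) % accum_steps == 0:
        opt_a.step()
        opt_a.zero_grad()
        opt_b.step()
        opt_b.zero_grad()

In the paper's full setting the modules would run on separate workers in parallel; the queue above merely stands in for that communication delay so the effect of the accumulation step can be seen in isolation.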

Cite this Paper

BibTeX
@InProceedings{pmlr-v139-zhuang21a,
  title     = {Accumulated Decoupled Learning with Gradient Staleness Mitigation for Convolutional Neural Networks},
  author    = {Zhuang, Huiping and Weng, Zhenyu and Luo, Fulin and Kar-Ann, Toh and Li, Haizhou and Lin, Zhiping},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {12935--12944},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/zhuang21a/zhuang21a.pdf},
  url       = {https://proceedings.mlr.press/v139/zhuang21a.html},
  abstract  = {Gradient staleness is a major side effect in decoupled learning when training convolutional neural networks asynchronously. Existing methods that ignore this effect might result in reduced generalization and even divergence. In this paper, we propose an accumulated decoupled learning (ADL), which includes a module-wise gradient accumulation in order to mitigate the gradient staleness. Unlike prior arts ignoring the gradient staleness, we quantify the staleness in such a way that its mitigation can be quantitatively visualized. As a new learning scheme, the proposed ADL is theoretically shown to converge to critical points in spite of its asynchronism. Extensive experiments on CIFAR-10 and ImageNet datasets are conducted, demonstrating that ADL gives promising generalization results while the state-of-the-art methods experience reduced generalization and divergence. In addition, our ADL is shown to have the fastest training speed among the compared methods.}
}
EndNote
%0 Conference Paper
%T Accumulated Decoupled Learning with Gradient Staleness Mitigation for Convolutional Neural Networks
%A Huiping Zhuang
%A Zhenyu Weng
%A Fulin Luo
%A Toh Kar-Ann
%A Haizhou Li
%A Zhiping Lin
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-zhuang21a
%I PMLR
%P 12935--12944
%U https://proceedings.mlr.press/v139/zhuang21a.html
%V 139
%X Gradient staleness is a major side effect in decoupled learning when training convolutional neural networks asynchronously. Existing methods that ignore this effect might result in reduced generalization and even divergence. In this paper, we propose an accumulated decoupled learning (ADL), which includes a module-wise gradient accumulation in order to mitigate the gradient staleness. Unlike prior arts ignoring the gradient staleness, we quantify the staleness in such a way that its mitigation can be quantitatively visualized. As a new learning scheme, the proposed ADL is theoretically shown to converge to critical points in spite of its asynchronism. Extensive experiments on CIFAR-10 and ImageNet datasets are conducted, demonstrating that ADL gives promising generalization results while the state-of-the-art methods experience reduced generalization and divergence. In addition, our ADL is shown to have the fastest training speed among the compared methods.
APA
Zhuang, H., Weng, Z., Luo, F., Kar-Ann, T., Li, H. & Lin, Z. (2021). Accumulated Decoupled Learning with Gradient Staleness Mitigation for Convolutional Neural Networks. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:12935-12944. Available from https://proceedings.mlr.press/v139/zhuang21a.html.