Asynchronous Stochastic Gradient Descent with Delay Compensation

Shuxin Zheng, Qi Meng, Taifeng Wang, Wei Chen, Nenghai Yu, Zhi-Ming Ma, Tie-Yan Liu
Proceedings of the 34th International Conference on Machine Learning, PMLR 70:4120-4129, 2017.

Abstract

With the fast development of deep learning, it has become common to learn big neural networks using massive training data. Asynchronous Stochastic Gradient Descent (ASGD) is widely adopted for this task because of its efficiency, but it is known to suffer from the problem of delayed gradients: by the time a local worker adds its gradient to the global model, the global model may already have been updated by other workers, so the gradient becomes “delayed”. We propose a novel technique to compensate for this delay, so as to make the optimization behavior of ASGD closer to that of sequential SGD. This is achieved by leveraging a Taylor expansion of the gradient function and efficient approximators of the Hessian matrix of the loss function. We call the new algorithm Delay-Compensated ASGD (DC-ASGD). We evaluated the proposed algorithm on the CIFAR-10 and ImageNet datasets, and the experimental results demonstrate that DC-ASGD outperforms both synchronous SGD and asynchronous SGD, and nearly approaches the performance of sequential SGD.
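
To make the idea concrete, below is a minimal NumPy sketch of a server-side delay-compensated update in the spirit of the abstract: the delayed gradient, computed at an older copy of the model, is corrected with a first-order Taylor term in which the Hessian is replaced by a cheap diagonal approximation built from the gradient itself. The function name, the toy values, and the choice of the scaling parameter lam are illustrative assumptions, not code or settings from the authors.

import numpy as np

def dc_asgd_update(w_global, g_delayed, w_backup, lr=0.1, lam=0.04):
    # g_delayed was computed by a worker at w_backup, but the global
    # model has since moved to w_global.  Correct the gradient with a
    # first-order Taylor term, approximating the Hessian at w_backup by
    # the diagonal term lam * g_delayed * g_delayed (an assumption-laden
    # sketch of the delay-compensation idea, not the authors' code).
    compensated = g_delayed + lam * g_delayed * g_delayed * (w_global - w_backup)
    return w_global - lr * compensated

# Toy usage: the global model drifts while a worker computes its gradient.
w_backup = np.array([0.5, -1.2, 0.3])                 # model the worker pulled
g_delayed = np.array([0.1, -0.4, 0.2])                # gradient computed at w_backup
w_global = w_backup + np.array([0.02, -0.01, 0.03])   # model after other workers' updates
print(dc_asgd_update(w_global, g_delayed, w_backup))

Without the correction term (lam = 0), this reduces to plain ASGD, which simply applies the stale gradient to the updated global model.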

Cite this Paper


BibTeX
@InProceedings{pmlr-v70-zheng17b,
  title     = {Asynchronous Stochastic Gradient Descent with Delay Compensation},
  author    = {Shuxin Zheng and Qi Meng and Taifeng Wang and Wei Chen and Nenghai Yu and Zhi-Ming Ma and Tie-Yan Liu},
  booktitle = {Proceedings of the 34th International Conference on Machine Learning},
  pages     = {4120--4129},
  year      = {2017},
  editor    = {Precup, Doina and Teh, Yee Whye},
  volume    = {70},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--11 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v70/zheng17b/zheng17b.pdf},
  url       = {https://proceedings.mlr.press/v70/zheng17b.html},
  abstract  = {With the fast development of deep learning, it has become common to learn big neural networks using massive training data. Asynchronous Stochastic Gradient Descent (ASGD) is widely adopted to fulfill this task for its efficiency, which is, however, known to suffer from the problem of delayed gradients. That is, when a local worker adds its gradient to the global model, the global model may have been updated by other workers and this gradient becomes “delayed”. We propose a novel technology to compensate this delay, so as to make the optimization behavior of ASGD closer to that of sequential SGD. This is achieved by leveraging Taylor expansion of the gradient function and efficient approximators to the Hessian matrix of the loss function. We call the new algorithm Delay Compensated ASGD (DC-ASGD). We evaluated the proposed algorithm on CIFAR-10 and ImageNet datasets, and the experimental results demonstrate that DC-ASGD outperforms both synchronous SGD and asynchronous SGD, and nearly approaches the performance of sequential SGD.}
}
Endnote
%0 Conference Paper
%T Asynchronous Stochastic Gradient Descent with Delay Compensation
%A Shuxin Zheng
%A Qi Meng
%A Taifeng Wang
%A Wei Chen
%A Nenghai Yu
%A Zhi-Ming Ma
%A Tie-Yan Liu
%B Proceedings of the 34th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2017
%E Doina Precup
%E Yee Whye Teh
%F pmlr-v70-zheng17b
%I PMLR
%P 4120--4129
%U https://proceedings.mlr.press/v70/zheng17b.html
%V 70
%X With the fast development of deep learning, it has become common to learn big neural networks using massive training data. Asynchronous Stochastic Gradient Descent (ASGD) is widely adopted to fulfill this task for its efficiency, which is, however, known to suffer from the problem of delayed gradients. That is, when a local worker adds its gradient to the global model, the global model may have been updated by other workers and this gradient becomes “delayed”. We propose a novel technology to compensate this delay, so as to make the optimization behavior of ASGD closer to that of sequential SGD. This is achieved by leveraging Taylor expansion of the gradient function and efficient approximators to the Hessian matrix of the loss function. We call the new algorithm Delay Compensated ASGD (DC-ASGD). We evaluated the proposed algorithm on CIFAR-10 and ImageNet datasets, and the experimental results demonstrate that DC-ASGD outperforms both synchronous SGD and asynchronous SGD, and nearly approaches the performance of sequential SGD.
APA
Zheng, S., Meng, Q., Wang, T., Chen, W., Yu, N., Ma, Z., & Liu, T. (2017). Asynchronous Stochastic Gradient Descent with Delay Compensation. Proceedings of the 34th International Conference on Machine Learning, in Proceedings of Machine Learning Research 70:4120-4129. Available from https://proceedings.mlr.press/v70/zheng17b.html.