Error Compensated Quantized SGD and its Applications to Large-scale Distributed Optimization

Jiaxiang Wu, Weidong Huang, Junzhou Huang, Tong Zhang
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:5325-5333, 2018.

Abstract

Large-scale distributed optimization is of great importance in various applications. For data-parallel distributed learning, inter-node gradient communication often becomes the performance bottleneck. In this paper, we propose the error compensated quantized stochastic gradient descent algorithm to improve training efficiency. Local gradients are quantized to reduce the communication overhead, and the accumulated quantization error is used to speed up convergence. Furthermore, we present a theoretical analysis of the convergence behaviour and demonstrate the algorithm's advantage over competing methods. Extensive experiments indicate that our algorithm can compress gradients by up to two orders of magnitude without performance degradation.
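The sketch below illustrates the error-feedback idea described in the abstract: each worker adds the quantization error accumulated so far to its fresh gradient, quantizes the compensated vector, sends the low-bit result, and carries the new residual forward. This is a minimal single-worker illustration, not the authors' released implementation; the function names (`quantize`, `ec_quantized_sgd_step`) and the uniform stochastic quantizer are assumptions chosen for brevity rather than the paper's exact scheme.

```python
# Hypothetical sketch of error-compensated quantized SGD (error feedback).
import numpy as np

def quantize(v, num_levels=4):
    """Uniform stochastic quantization of v onto `num_levels` levels per sign,
    scaled by the vector's max magnitude (one common low-bit scheme)."""
    scale = np.max(np.abs(v)) + 1e-12
    normalized = np.abs(v) / scale * num_levels
    lower = np.floor(normalized)
    # Randomized rounding keeps the quantizer unbiased in expectation.
    prob_up = normalized - lower
    levels = lower + (np.random.rand(*v.shape) < prob_up)
    return np.sign(v) * levels * scale / num_levels

def ec_quantized_sgd_step(w, grad, error, lr=0.1, num_levels=4):
    """One worker-side step: compensate with the accumulated error, quantize,
    store the new residual, and apply the quantized gradient."""
    compensated = grad + error             # add back previously dropped information
    q = quantize(compensated, num_levels)  # low-bit message that would be communicated
    new_error = compensated - q            # residual carried to the next iteration
    w_new = w - lr * q                     # update uses only the quantized gradient
    return w_new, new_error

# Toy usage on the quadratic objective f(w) = 0.5 * ||w||^2, whose gradient is w.
w = np.ones(10)
error = np.zeros_like(w)
for _ in range(100):
    w, error = ec_quantized_sgd_step(w, grad=w, error=error)
print(np.linalg.norm(w))  # should be close to zero despite the coarse quantization
```

In a multi-worker setting each worker would keep its own `error` buffer and only the quantized vectors would be aggregated across nodes, which is what reduces the communication cost.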

Cite this Paper


BibTeX
@InProceedings{pmlr-v80-wu18d,
  title     = {Error Compensated Quantized {SGD} and its Applications to Large-scale Distributed Optimization},
  author    = {Wu, Jiaxiang and Huang, Weidong and Huang, Junzhou and Zhang, Tong},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {5325--5333},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/wu18d/wu18d.pdf},
  url       = {https://proceedings.mlr.press/v80/wu18d.html},
  abstract  = {Large-scale distributed optimization is of great importance in various applications. For data-parallel based distributed learning, the inter-node gradient communication often becomes the performance bottleneck. In this paper, we propose the error compensated quantized stochastic gradient descent algorithm to improve the training efficiency. Local gradients are quantized to reduce the communication overhead, and accumulated quantization error is utilized to speed up the convergence. Furthermore, we present theoretical analysis on the convergence behaviour, and demonstrate its advantage over competitors. Extensive experiments indicate that our algorithm can compress gradients by a factor of up to two magnitudes without performance degradation.}
}
Endnote
%0 Conference Paper
%T Error Compensated Quantized SGD and its Applications to Large-scale Distributed Optimization
%A Jiaxiang Wu
%A Weidong Huang
%A Junzhou Huang
%A Tong Zhang
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-wu18d
%I PMLR
%P 5325--5333
%U https://proceedings.mlr.press/v80/wu18d.html
%V 80
%X Large-scale distributed optimization is of great importance in various applications. For data-parallel based distributed learning, the inter-node gradient communication often becomes the performance bottleneck. In this paper, we propose the error compensated quantized stochastic gradient descent algorithm to improve the training efficiency. Local gradients are quantized to reduce the communication overhead, and accumulated quantization error is utilized to speed up the convergence. Furthermore, we present theoretical analysis on the convergence behaviour, and demonstrate its advantage over competitors. Extensive experiments indicate that our algorithm can compress gradients by a factor of up to two magnitudes without performance degradation.
APA
Wu, J., Huang, W., Huang, J. & Zhang, T. (2018). Error Compensated Quantized SGD and its Applications to Large-scale Distributed Optimization. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:5325-5333. Available from https://proceedings.mlr.press/v80/wu18d.html.