Decentralized Stochastic Optimization and Gossip Algorithms with Compressed Communication

Anastasia Koloskova, Sebastian Stich, Martin Jaggi
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:3478-3487, 2019.

Abstract

We consider decentralized stochastic optimization with the objective function (e.g. data samples for machine learning tasks) being distributed over n machines that can only communicate with their neighbors on a fixed communication graph. To address the communication bottleneck, the nodes compress (e.g. quantize or sparsify) their model updates. We cover both unbiased and biased compression operators with quality denoted by \delta <= 1 (\delta = 1 meaning no compression). We (i) propose a novel gossip-based stochastic gradient descent algorithm, CHOCO-SGD, that converges at rate O(1/(nT) + 1/(T \rho^2 \delta)^2) for strongly convex objectives, where T denotes the number of iterations and \rho the eigengap of the connectivity matrix. We (ii) present a novel gossip algorithm, CHOCO-GOSSIP, for the average consensus problem that converges in time O(1/(\rho^2 \delta) \log(1/\epsilon)) for accuracy \epsilon > 0. This is, to the best of our knowledge, the first gossip algorithm that supports arbitrary compressed messages for \delta > 0 and still exhibits linear convergence. We (iii) show in experiments that both of our algorithms outperform the respective state-of-the-art baselines, and that CHOCO-SGD can reduce communication by at least two orders of magnitude.
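The abstract does not spell out the update rules, but the role of the compression operator can be illustrated. Below is a minimal Python sketch, for illustration only, of a biased top-k sparsification compressor (quality roughly \delta = k/d) and one synchronous round of compressed gossip in the spirit of CHOCO-GOSSIP; the variable names (x, x_hat, W, gamma), the helper choco_gossip_step, and this exact update form are assumptions made here for readability, not taken verbatim from the paper.

import numpy as np

def top_k(v, k):
    """Biased compression: keep only the k largest-magnitude entries of v."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def choco_gossip_step(x, x_hat, W, gamma, k):
    """One synchronous gossip round (illustrative sketch, not the paper's code).

    x     : (n, d) private iterates, one row per node
    x_hat : (n, d) publicly known, compressed copies of the iterates
    W     : (n, n) doubly stochastic mixing matrix of the communication graph
    gamma : consensus step size
    k     : number of coordinates each node transmits per message
    """
    n, _ = x.shape
    # Consensus step: each node averages only the public (compressed) copies
    # of its neighbors; (W - I) @ x_hat gives sum_j w_ij (x_hat_j - x_hat_i).
    x = x + gamma * (W - np.eye(n)) @ x_hat
    # Communication step: each node sends a compressed correction so that
    # neighbors can refresh their view of its public copy.
    for i in range(n):
        x_hat[i] = x_hat[i] + top_k(x[i] - x_hat[i], k)
    return x, x_hat

# Example usage (assumed setup): 4 nodes on a ring, d = 10, k = 2 coordinates per message.
# n, d = 4, 10
# W = 0.5 * np.eye(n) + 0.25 * (np.roll(np.eye(n), 1, 0) + np.roll(np.eye(n), -1, 0))
# x = np.random.randn(n, d); x_hat = np.zeros_like(x)
# for _ in range(100):
#     x, x_hat = choco_gossip_step(x, x_hat, W, gamma=0.2, k=2)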

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-koloskova19a,
  title     = {Decentralized Stochastic Optimization and Gossip Algorithms with Compressed Communication},
  author    = {Koloskova, Anastasia and Stich, Sebastian and Jaggi, Martin},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {3478--3487},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/koloskova19a/koloskova19a.pdf},
  url       = {https://proceedings.mlr.press/v97/koloskova19a.html},
  abstract  = {We consider decentralized stochastic optimization with the objective function (e.g. data samples for machine learning tasks) being distributed over n machines that can only communicate to their neighbors on a fixed communication graph. To address the communication bottleneck, the nodes compress (e.g. quantize or sparsify) their model updates. We cover both unbiased and biased compression operators with quality denoted by \delta <= 1 (\delta=1 meaning no compression). We (i) propose a novel gossip-based stochastic gradient descent algorithm, CHOCO-SGD, that converges at rate O(1/(nT) + 1/(T \rho^2 \delta)^2) for strongly convex objectives, where T denotes the number of iterations and \rho the eigengap of the connectivity matrix. We (ii) present a novel gossip algorithm, CHOCO-GOSSIP, for the average consensus problem that converges in time O(1/(\rho^2\delta) \log (1/\epsilon)) for accuracy \epsilon > 0. This is (up to our knowledge) the first gossip algorithm that supports arbitrary compressed messages for \delta > 0 and still exhibits linear convergence. We (iii) show in experiments that both of our algorithms do outperform the respective state-of-the-art baselines and CHOCO-SGD can reduce communication by at least two orders of magnitudes.}
}
Endnote
%0 Conference Paper
%T Decentralized Stochastic Optimization and Gossip Algorithms with Compressed Communication
%A Anastasia Koloskova
%A Sebastian Stich
%A Martin Jaggi
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-koloskova19a
%I PMLR
%P 3478--3487
%U https://proceedings.mlr.press/v97/koloskova19a.html
%V 97
%X We consider decentralized stochastic optimization with the objective function (e.g. data samples for machine learning tasks) being distributed over n machines that can only communicate to their neighbors on a fixed communication graph. To address the communication bottleneck, the nodes compress (e.g. quantize or sparsify) their model updates. We cover both unbiased and biased compression operators with quality denoted by \delta <= 1 (\delta=1 meaning no compression). We (i) propose a novel gossip-based stochastic gradient descent algorithm, CHOCO-SGD, that converges at rate O(1/(nT) + 1/(T \rho^2 \delta)^2) for strongly convex objectives, where T denotes the number of iterations and \rho the eigengap of the connectivity matrix. We (ii) present a novel gossip algorithm, CHOCO-GOSSIP, for the average consensus problem that converges in time O(1/(\rho^2\delta) \log (1/\epsilon)) for accuracy \epsilon > 0. This is (up to our knowledge) the first gossip algorithm that supports arbitrary compressed messages for \delta > 0 and still exhibits linear convergence. We (iii) show in experiments that both of our algorithms do outperform the respective state-of-the-art baselines and CHOCO-SGD can reduce communication by at least two orders of magnitudes.
APA
Koloskova, A., Stich, S. & Jaggi, M. (2019). Decentralized Stochastic Optimization and Gossip Algorithms with Compressed Communication. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:3478-3487. Available from https://proceedings.mlr.press/v97/koloskova19a.html.