A Linearly Convergent Algorithm for Decentralized Optimization: Sending Less Bits for Free!

Dmitry Kovalev, Anastasia Koloskova, Martin Jaggi, Peter Richtarik, Sebastian Stich
Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, PMLR 130:4087-4095, 2021.

Abstract

Decentralized optimization methods enable on-device training of machine learning models without a central coordinator. In many scenarios, communication between devices is energy-demanding and time-consuming, and forms the bottleneck of the entire system. We propose a new randomized first-order method which tackles the communication bottleneck by applying randomized compression operators to the communicated messages. By combining our scheme with a new variance reduction technique that progressively reduces the adverse effect of the injected quantization noise over the iterations, we obtain a scheme that converges linearly on strongly convex decentralized problems while using compressed communication only. We prove that our method can solve these problems without any increase in the number of communications compared to the baseline that does not perform any communication compression, while still allowing for a significant compression factor that depends on the conditioning of the problem and the topology of the network. We confirm our theoretical findings in numerical experiments.
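The sketch below is a minimal, self-contained illustration of the two ingredients the abstract describes: a randomized compression operator applied to every communicated message, and communicating compressed differences against publicly known copies so that the injected quantization noise shrinks as the iterates converge. It is not the paper's algorithm; the ring topology, random-k sparsifier, quadratic local losses, and all parameter values are illustrative assumptions.

# Minimal sketch (NOT the paper's exact algorithm) of decentralized gradient descent
# with compressed-difference gossip. Each node i minimizes a local quadratic loss
# f_i(x) = 0.5 * ||A_i x - b_i||^2 and only exchanges compressed messages.
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 8, 20, 5                       # nodes, dimension, coordinates sent per message

A = [rng.standard_normal((30, d)) for _ in range(n)]
b = [rng.standard_normal(30) for _ in range(n)]

def grad(i, x):
    """Gradient of the local quadratic loss at node i."""
    return A[i].T @ (A[i] @ x - b[i])

def rand_k(v, k):
    """Unbiased random-k sparsification: keep k coordinates, rescale by d/k."""
    out = np.zeros_like(v)
    idx = rng.choice(v.size, size=k, replace=False)
    out[idx] = v[idx] * (v.size / k)
    return out

# Doubly stochastic mixing matrix for a ring topology.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1.0 / 3.0

x = np.zeros((n, d))                     # local iterates (one row per node)
x_hat = np.zeros((n, d))                 # publicly known compressed copies
eta, gamma = 2e-3, 0.1                   # step size and consensus step size (hand-tuned)

for t in range(1000):
    for i in range(n):                   # local gradient step
        x[i] -= eta * grad(i, x[i])
    # each node broadcasts a compressed difference; all public copies are updated
    x_hat += np.array([rand_k(x[i] - x_hat[i], k) for i in range(n)])
    # gossip (consensus) step performed on the public copies only
    x += gamma * (W @ x_hat - x_hat)

print("consensus error:", np.linalg.norm(x - x.mean(axis=0)))

The point of compressing the difference x_i - x_hat_i rather than x_i itself is that this difference vanishes as the method converges, so the compression error vanishes with it; the paper achieves this effect through a dedicated variance reduction mechanism rather than the particular difference-compression scheme sketched here.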

Cite this Paper


BibTeX
@InProceedings{pmlr-v130-kovalev21a,
  title     = {A Linearly Convergent Algorithm for Decentralized Optimization: Sending Less Bits for Free!},
  author    = {Kovalev, Dmitry and Koloskova, Anastasia and Jaggi, Martin and Richtarik, Peter and Stich, Sebastian},
  booktitle = {Proceedings of The 24th International Conference on Artificial Intelligence and Statistics},
  pages     = {4087--4095},
  year      = {2021},
  editor    = {Banerjee, Arindam and Fukumizu, Kenji},
  volume    = {130},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--15 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v130/kovalev21a/kovalev21a.pdf},
  url       = {https://proceedings.mlr.press/v130/kovalev21a.html},
  abstract  = {Decentralized optimization methods enable on-device training of machine learning models without a central coordinator. In many scenarios communication between devices is energy demanding and time consuming and forms the bottleneck of the entire system. We propose a new randomized first-order method which tackles the communication bottleneck by applying randomized compression operators to the communicated messages. By combining our scheme with a new variance reduction technique that progressively throughout the iterations reduces the adverse effect of the injected quantization noise, we obtain a scheme that converges linearly on strongly convex decentralized problems while using compressed communication only. We prove that our method can solve the problems without any increase in the number of communications compared to the baseline which does not perform any communication compression while still allowing for a significant compression factor which depends on the conditioning of the problem and the topology of the network. We confirm our theoretical findings in numerical experiments.}
}
Endnote
%0 Conference Paper
%T A Linearly Convergent Algorithm for Decentralized Optimization: Sending Less Bits for Free!
%A Dmitry Kovalev
%A Anastasia Koloskova
%A Martin Jaggi
%A Peter Richtarik
%A Sebastian Stich
%B Proceedings of The 24th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2021
%E Arindam Banerjee
%E Kenji Fukumizu
%F pmlr-v130-kovalev21a
%I PMLR
%P 4087--4095
%U https://proceedings.mlr.press/v130/kovalev21a.html
%V 130
%X Decentralized optimization methods enable on-device training of machine learning models without a central coordinator. In many scenarios communication between devices is energy demanding and time consuming and forms the bottleneck of the entire system. We propose a new randomized first-order method which tackles the communication bottleneck by applying randomized compression operators to the communicated messages. By combining our scheme with a new variance reduction technique that progressively throughout the iterations reduces the adverse effect of the injected quantization noise, we obtain a scheme that converges linearly on strongly convex decentralized problems while using compressed communication only. We prove that our method can solve the problems without any increase in the number of communications compared to the baseline which does not perform any communication compression while still allowing for a significant compression factor which depends on the conditioning of the problem and the topology of the network. We confirm our theoretical findings in numerical experiments.
APA
Kovalev, D., Koloskova, A., Jaggi, M., Richtarik, P. & Stich, S. (2021). A Linearly Convergent Algorithm for Decentralized Optimization: Sending Less Bits for Free!. Proceedings of The 24th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 130:4087-4095. Available from https://proceedings.mlr.press/v130/kovalev21a.html.