MARINA: Faster Non-Convex Distributed Learning with Compression

Eduard Gorbunov, Konstantin P. Burlachenko, Zhize Li, Peter Richtarik
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:3788-3798, 2021.

Abstract

We develop and analyze MARINA: a new communication efficient method for non-convex distributed learning over heterogeneous datasets. MARINA employs a novel communication compression strategy based on the compression of gradient differences that is reminiscent of but different from the strategy employed in the DIANA method of Mishchenko et al. (2019). Unlike virtually all competing distributed first-order methods, including DIANA, ours is based on a carefully designed biased gradient estimator, which is the key to its superior theoretical and practical performance. The communication complexity bounds we prove for MARINA are evidently better than those of all previous first-order methods. Further, we develop and analyze two variants of MARINA: VR-MARINA and PP-MARINA. The first method is designed for the case when the local loss functions owned by clients are either of a finite sum or of an expectation form, and the second method allows for a partial participation of clients – a feature important in federated learning. All our methods are superior to previous state-of-the-art methods in terms of oracle/communication complexity. Finally, we provide a convergence analysis of all methods for problems satisfying the Polyak-Łojasiewicz condition.
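
To illustrate the gradient-difference compression idea described in the abstract, the following is a minimal toy sketch (not the paper's algorithm statement or experimental setup): n simulated clients hold quadratic losses, and in most rounds each client communicates only a Rand-k-sparsified gradient difference, which the server adds to its previous estimator; with small probability p all clients send full gradients. The quadratic data, step size gamma, probability p, and the Rand-k quantizer are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
n, d, k = 10, 50, 5                                  # clients, dimension, coordinates kept by Rand-k
A = [rng.standard_normal((20, d)) for _ in range(n)] # toy local data
b = [rng.standard_normal(20) for _ in range(n)]

def local_grad(i, x):
    # gradient of the toy local loss f_i(x) = 0.5 * ||A_i x - b_i||^2
    return A[i].T @ (A[i] @ x - b[i])

def rand_k(v):
    # Rand-k sparsification: keep k random coordinates, rescaled so the compressor is unbiased
    idx = rng.choice(d, size=k, replace=False)
    out = np.zeros_like(v)
    out[idx] = v[idx] * (d / k)
    return out

gamma, p = 1e-3, k / d   # illustrative step size and full-synchronization probability
x = np.zeros(d)
g = np.mean([local_grad(i, x) for i in range(n)], axis=0)  # start from the full gradient

for _ in range(500):
    x_new = x - gamma * g
    if rng.random() < p:
        # occasionally every client sends its full, uncompressed local gradient
        g = np.mean([local_grad(i, x_new) for i in range(n)], axis=0)
    else:
        # otherwise each client sends only a compressed gradient *difference*;
        # adding it to the previous estimator yields a biased estimate of the new gradient
        g = g + np.mean([rand_k(local_grad(i, x_new) - local_grad(i, x)) for i in range(n)], axis=0)
    x = x_new

full_grad = np.mean([local_grad(i, x) for i in range(n)], axis=0)
print("gradient norm after 500 rounds:", np.linalg.norm(full_grad))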

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-gorbunov21a,
  title     = {MARINA: Faster Non-Convex Distributed Learning with Compression},
  author    = {Gorbunov, Eduard and Burlachenko, Konstantin P. and Li, Zhize and Richtarik, Peter},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {3788--3798},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/gorbunov21a/gorbunov21a.pdf},
  url       = {https://proceedings.mlr.press/v139/gorbunov21a.html},
  abstract  = {We develop and analyze MARINA: a new communication efficient method for non-convex distributed learning over heterogeneous datasets. MARINA employs a novel communication compression strategy based on the compression of gradient differences that is reminiscent of but different from the strategy employed in the DIANA method of Mishchenko et al. (2019). Unlike virtually all competing distributed first-order methods, including DIANA, ours is based on a carefully designed biased gradient estimator, which is the key to its superior theoretical and practical performance. The communication complexity bounds we prove for MARINA are evidently better than those of all previous first-order methods. Further, we develop and analyze two variants of MARINA: VR-MARINA and PP-MARINA. The first method is designed for the case when the local loss functions owned by clients are either of a finite sum or of an expectation form, and the second method allows for a partial participation of clients {–} a feature important in federated learning. All our methods are superior to previous state-of-the-art methods in terms of oracle/communication complexity. Finally, we provide a convergence analysis of all methods for problems satisfying the Polyak-{Ł}ojasiewicz condition.}
}
Endnote
%0 Conference Paper
%T MARINA: Faster Non-Convex Distributed Learning with Compression
%A Eduard Gorbunov
%A Konstantin P. Burlachenko
%A Zhize Li
%A Peter Richtarik
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-gorbunov21a
%I PMLR
%P 3788--3798
%U https://proceedings.mlr.press/v139/gorbunov21a.html
%V 139
%X We develop and analyze MARINA: a new communication efficient method for non-convex distributed learning over heterogeneous datasets. MARINA employs a novel communication compression strategy based on the compression of gradient differences that is reminiscent of but different from the strategy employed in the DIANA method of Mishchenko et al. (2019). Unlike virtually all competing distributed first-order methods, including DIANA, ours is based on a carefully designed biased gradient estimator, which is the key to its superior theoretical and practical performance. The communication complexity bounds we prove for MARINA are evidently better than those of all previous first-order methods. Further, we develop and analyze two variants of MARINA: VR-MARINA and PP-MARINA. The first method is designed for the case when the local loss functions owned by clients are either of a finite sum or of an expectation form, and the second method allows for a partial participation of clients – a feature important in federated learning. All our methods are superior to previous state-of-the-art methods in terms of oracle/communication complexity. Finally, we provide a convergence analysis of all methods for problems satisfying the Polyak-Łojasiewicz condition.
APA
Gorbunov, E., Burlachenko, K.P., Li, Z. & Richtarik, P. (2021). MARINA: Faster Non-Convex Distributed Learning with Compression. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:3788-3798. Available from https://proceedings.mlr.press/v139/gorbunov21a.html.
