A Delay-tolerant Proximal-Gradient Algorithm for Distributed Learning

Konstantin Mishchenko, Franck Iutzeler, Jérôme Malick, Massih-Reza Amini
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:3587-3595, 2018.

Abstract

Distributed learning aims at computing high-quality models by training over scattered data. This covers a diversity of scenarios, including computer clusters or mobile agents. One of the main challenges is then to deal with heterogeneous machines and unreliable communications. In this setting, we propose and analyze a flexible asynchronous optimization algorithm for solving nonsmooth learning problems. Unlike most existing methods, our algorithm is adjustable to various levels of communication costs, machines' computational power, and data distribution evenness. We prove that the algorithm converges linearly with a fixed learning rate that depends neither on communication delays nor on the number of machines. Although long delays in communication may slow down performance, no delay can break convergence.
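
To make the abstract's claim concrete, here is a minimal sketch of an asynchronous proximal-gradient loop in the spirit of the paper: worker gradients are computed at stale copies of the iterate, and a master applies a prox step with a fixed learning rate. This is a serial simulation of asynchrony, not the paper's exact algorithm; the problem (l1-regularized least squares), the step-size choice, and all names (n_workers, max_delay, shard_grad, ...) are illustrative assumptions.

import numpy as np

# Sketch: asynchronous proximal gradient with bounded staleness on
#   min_x (1/2n)||Ax - b||^2 + lam * ||x||_1,
# with the data scattered across n_workers shards (assumption-laden toy setup).

rng = np.random.default_rng(0)
n, d, n_workers, max_delay = 200, 20, 4, 5
A = rng.standard_normal((n, d))
x_true = np.zeros(d); x_true[:3] = [2.0, -1.0, 0.5]
b = A @ x_true + 0.01 * rng.standard_normal(n)
lam = 0.1
shards = np.array_split(np.arange(n), n_workers)  # scattered data

def prox_l1(v, t):
    """Soft-thresholding: the prox of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def shard_grad(x, idx):
    """Gradient of the smooth part restricted to one worker's shard."""
    Ai, bi = A[idx], b[idx]
    return Ai.T @ (Ai @ x - bi) / n

L = np.linalg.norm(A, 2) ** 2 / n  # smoothness constant of the full loss
gamma = 0.5 / L                    # fixed step size, chosen independently of delays

x = np.zeros(d)
history = [x.copy()]               # past iterates, from which stale copies are drawn
grads = [shard_grad(x, s) for s in shards]  # each worker's latest contribution

for k in range(500):
    i = rng.integers(n_workers)                         # an arbitrary worker responds
    delay = rng.integers(min(max_delay, len(history)))  # bounded staleness
    stale_x = history[-1 - delay]                       # gradient evaluated at an old point
    grads[i] = shard_grad(stale_x, shards[i])
    x = prox_l1(x - gamma * sum(grads), gamma * lam)    # master prox step on the aggregate
    history.append(x.copy())

print("distance to true signal:", np.linalg.norm(x - x_true))

Running this, the iterates approach the true sparse signal even though every update uses gradients that are up to max_delay iterations stale, which is the qualitative behavior the abstract asserts: delays may slow progress, but they do not break convergence.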

Cite this Paper

BibTeX
@InProceedings{pmlr-v80-mishchenko18a,
  title     = {A Delay-tolerant Proximal-Gradient Algorithm for Distributed Learning},
  author    = {Mishchenko, Konstantin and Iutzeler, Franck and Malick, J{\'e}r{\^o}me and Amini, Massih-Reza},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {3587--3595},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/mishchenko18a/mishchenko18a.pdf},
  url       = {https://proceedings.mlr.press/v80/mishchenko18a.html},
  abstract  = {Distributed learning aims at computing high-quality models by training over scattered data. This covers a diversity of scenarios, including computer clusters or mobile agents. One of the main challenges is then to deal with heterogeneous machines and unreliable communications. In this setting, we propose and analyze a flexible asynchronous optimization algorithm for solving nonsmooth learning problems. Unlike most existing methods, our algorithm is adjustable to various levels of communication costs, machines' computational power, and data distribution evenness. We prove that the algorithm converges linearly with a fixed learning rate that depends neither on communication delays nor on the number of machines. Although long delays in communication may slow down performance, no delay can break convergence.}
}
Endnote
%0 Conference Paper
%T A Delay-tolerant Proximal-Gradient Algorithm for Distributed Learning
%A Konstantin Mishchenko
%A Franck Iutzeler
%A Jérôme Malick
%A Massih-Reza Amini
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-mishchenko18a
%I PMLR
%P 3587--3595
%U https://proceedings.mlr.press/v80/mishchenko18a.html
%V 80
%X Distributed learning aims at computing high-quality models by training over scattered data. This covers a diversity of scenarios, including computer clusters or mobile agents. One of the main challenges is then to deal with heterogeneous machines and unreliable communications. In this setting, we propose and analyze a flexible asynchronous optimization algorithm for solving nonsmooth learning problems. Unlike most existing methods, our algorithm is adjustable to various levels of communication costs, machines' computational power, and data distribution evenness. We prove that the algorithm converges linearly with a fixed learning rate that depends neither on communication delays nor on the number of machines. Although long delays in communication may slow down performance, no delay can break convergence.
APA
Mishchenko, K., Iutzeler, F., Malick, J., & Amini, M. (2018). A Delay-tolerant Proximal-Gradient Algorithm for Distributed Learning. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:3587-3595. Available from https://proceedings.mlr.press/v80/mishchenko18a.html.