Accelerated Primal-Dual Algorithms for Distributed Smooth Convex Optimization over Networks

Jinming Xu, Ye Tian, Ying Sun, Gesualdo Scutari
Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, PMLR 108:2381-2391, 2020.

Abstract

This paper proposes a novel family of primal-dual distributed algorithms for smooth, convex, multi-agent optimization over networks that use only gradient information and gossip communications. The algorithms can also employ acceleration in both computation and communication. We provide a unified analysis of their convergence rate, measured in terms of the Bregman distance associated with the saddle-point reformulation of the distributed optimization problem. When acceleration is employed, the rate is shown to be optimal, in the sense that it matches (under the proposed metric) existing complexity lower bounds for distributed algorithms applicable to this class of problems and using only gradient information and gossip communications. Preliminary numerical results on distributed least-squares regression problems show that the proposed algorithms compare favorably with existing distributed schemes.
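For context, here is a minimal sketch of the consensus saddle-point reformulation the abstract refers to, in generic notation (the matrix B and potential \phi below are illustrative and need not match the paper's exact symbols). Each of m agents holds a local copy x_i of the decision variable, and consensus is enforced through a symmetric, doubly stochastic gossip matrix W whose sparsity pattern matches the (connected) network:

  \min_{x \in \mathbb{R}^{mn}} \; f(x) := \sum_{i=1}^{m} f_i(x_i)
  \quad \text{s.t.} \quad Bx = 0, \qquad B := (I - W)^{1/2},

where Bx = 0 holds if and only if x_1 = \dots = x_m. Dualizing the constraint gives the saddle-point problem

  \min_{x} \max_{y} \; L(x, y) := f(x) + \langle y, Bx \rangle,

and convergence can then be measured through a Bregman distance, D_\phi(u, v) := \phi(u) - \phi(v) - \langle \nabla \phi(v), \, u - v \rangle, evaluated between the iterates and a saddle point of L.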

Cite this Paper


BibTeX
@InProceedings{pmlr-v108-xu20b,
  title     = {Accelerated Primal-Dual Algorithms for Distributed Smooth Convex Optimization over Networks},
  author    = {Xu, Jinming and Tian, Ye and Sun, Ying and Scutari, Gesualdo},
  booktitle = {Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics},
  pages     = {2381--2391},
  year      = {2020},
  editor    = {Chiappa, Silvia and Calandra, Roberto},
  volume    = {108},
  series    = {Proceedings of Machine Learning Research},
  month     = {26--28 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v108/xu20b/xu20b.pdf},
  url       = {https://proceedings.mlr.press/v108/xu20b.html}
}
Endnote
%0 Conference Paper
%T Accelerated Primal-Dual Algorithms for Distributed Smooth Convex Optimization over Networks
%A Jinming Xu
%A Ye Tian
%A Ying Sun
%A Gesualdo Scutari
%B Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2020
%E Silvia Chiappa
%E Roberto Calandra
%F pmlr-v108-xu20b
%I PMLR
%P 2381--2391
%U https://proceedings.mlr.press/v108/xu20b.html
%V 108
APA
Xu, J., Tian, Y., Sun, Y. & Scutari, G. (2020). Accelerated Primal-Dual Algorithms for Distributed Smooth Convex Optimization over Networks. Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 108:2381-2391. Available from https://proceedings.mlr.press/v108/xu20b.html.
