Acceleration in Distributed Optimization under Similarity

Ye Tian, Gesualdo Scutari, Tianyu Cao, Alexander Gasnikov
Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:5721-5756, 2022.

Abstract

We study distributed (strongly convex) optimization problems over a network of agents, with no centralized nodes. The agents' loss functions are assumed to be similar, due to statistical data similarity or otherwise. To reduce the number of communications needed to reach a target solution accuracy, we propose a preconditioned, accelerated distributed method. An $\varepsilon$-solution is achieved in $\tilde{\mathcal{O}}\big(\sqrt{\frac{\beta/\mu}{1-\rho}}\log \frac{1}{\varepsilon}\big)$ communication steps, where $\beta/\mu$ is the relative condition number between the global and local loss functions, and $\rho$ characterizes the connectivity of the network. This rate matches (up to poly-log factors) communication complexity lower bounds for distributed gossip algorithms applied to the class of problems of interest. Numerical results show significant communication savings with respect to existing accelerated distributed schemes, especially when solving ill-conditioned problems.
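As a rough illustration of how the claimed rate scales (not part of the paper; the constants and poly-log factors hidden by $\tilde{\mathcal{O}}$ are ignored, and the function name and example values below are hypothetical), the following Python sketch evaluates $\sqrt{\frac{\beta/\mu}{1-\rho}}\log\frac{1}{\varepsilon}$ for a few connectivity levels:

```python
import math

def comm_rounds(beta_over_mu, rho, eps):
    """Illustrative evaluation of sqrt((beta/mu)/(1 - rho)) * log(1/eps).

    beta_over_mu: relative condition number between global and local losses
    rho:          network connectivity parameter (closer to 1 = worse connectivity)
    eps:          target solution accuracy
    Constants and poly-log factors hidden by the O-tilde notation are omitted.
    """
    return math.sqrt(beta_over_mu / (1.0 - rho)) * math.log(1.0 / eps)

# Same data similarity (beta/mu = 10), increasingly poor connectivity.
for rho in (0.1, 0.9, 0.99):
    print(f"rho={rho}: ~{comm_rounds(beta_over_mu=10.0, rho=rho, eps=1e-6):.0f} rounds")
```

The point of the toy computation is only that the number of communication rounds grows with both ill-conditioning ($\beta/\mu$) and poor connectivity ($\rho \to 1$), but only as a square root of their ratio.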

Cite this Paper


BibTeX
@InProceedings{pmlr-v151-tian22b,
  title     = {Acceleration in Distributed Optimization under Similarity},
  author    = {Tian, Ye and Scutari, Gesualdo and Cao, Tianyu and Gasnikov, Alexander},
  booktitle = {Proceedings of The 25th International Conference on Artificial Intelligence and Statistics},
  pages     = {5721--5756},
  year      = {2022},
  editor    = {Camps-Valls, Gustau and Ruiz, Francisco J. R. and Valera, Isabel},
  volume    = {151},
  series    = {Proceedings of Machine Learning Research},
  month     = {28--30 Mar},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v151/tian22b/tian22b.pdf},
  url       = {https://proceedings.mlr.press/v151/tian22b.html},
  abstract  = {We study distributed (strongly convex) optimization problems over a network of agents, with no centralized nodes. The loss functions of the agents are assumed to be similar, due to statistical data similarity or otherwise. In order to reduce the number of communications to reach a solution accuracy, we proposed a preconditioned, accelerated distributed method. An $\varepsilon$-solution is achieved in $\tilde{\mathcal{O}}\big(\sqrt{\frac{\beta/\mu}{1-\rho}}\log1/\varepsilon\big)$ number of communications steps, where $\beta/\mu$ is the relative condition number between the global and local loss functions, and $\rho$ characterizes the connectivity of the network. This rate matches (up to poly-log factors) lower complexity communication bounds of distributed gossip-algorithms applied to the class of problems of interest. Numerical results show significant communication savings with respect to existing accelerated distributed schemes, especially when solving ill-conditioned problems.}
}
Endnote
%0 Conference Paper
%T Acceleration in Distributed Optimization under Similarity
%A Ye Tian
%A Gesualdo Scutari
%A Tianyu Cao
%A Alexander Gasnikov
%B Proceedings of The 25th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2022
%E Gustau Camps-Valls
%E Francisco J. R. Ruiz
%E Isabel Valera
%F pmlr-v151-tian22b
%I PMLR
%P 5721--5756
%U https://proceedings.mlr.press/v151/tian22b.html
%V 151
%X We study distributed (strongly convex) optimization problems over a network of agents, with no centralized nodes. The loss functions of the agents are assumed to be similar, due to statistical data similarity or otherwise. In order to reduce the number of communications to reach a solution accuracy, we proposed a preconditioned, accelerated distributed method. An $\varepsilon$-solution is achieved in $\tilde{\mathcal{O}}\big(\sqrt{\frac{\beta/\mu}{1-\rho}}\log1/\varepsilon\big)$ number of communications steps, where $\beta/\mu$ is the relative condition number between the global and local loss functions, and $\rho$ characterizes the connectivity of the network. This rate matches (up to poly-log factors) lower complexity communication bounds of distributed gossip-algorithms applied to the class of problems of interest. Numerical results show significant communication savings with respect to existing accelerated distributed schemes, especially when solving ill-conditioned problems.
APA
Tian, Y., Scutari, G., Cao, T. & Gasnikov, A. (2022). Acceleration in Distributed Optimization under Similarity. Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 151:5721-5756. Available from https://proceedings.mlr.press/v151/tian22b.html.