Decentralized Convex Finite-Sum Optimization with Better Dependence on Condition Numbers

Yuxing Liu, Lesi Chen, Luo Luo
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:30807-30841, 2024.

Abstract

This paper studies the decentralized optimization problem, where the local objective on each node is an average of a finite set of convex functions and the global objective is strongly convex. We propose an efficient stochastic variance-reduced first-order method that allows different nodes to construct their stochastic local gradient estimators with different mini-batch sizes per iteration. We prove that the upper bound on the computation time of the proposed method depends on the global condition number, which is sharper than previous results that depend only on the local condition numbers. Compared with state-of-the-art methods, we also show that our method requires fewer local incremental first-order oracle calls and comparable communication cost. We further perform numerical experiments to validate the advantage of our method.
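The paper's own estimator is not reproduced here; as a loose single-node illustration of the variance-reduction idea the abstract refers to (an SVRG-style gradient estimator with a configurable mini-batch size, anchored at a reference point where the full gradient is known), one might sketch it as follows. All function names and the toy least-squares objective are assumptions for illustration, not the authors' method.

```python
import numpy as np

# Toy finite sum on one node: f_i(x) = 0.5 * (a_i @ x - b_i)^2,
# so grad f_i(x) = a_i * (a_i @ x - b_i).
rng = np.random.default_rng(0)
n, d = 50, 5
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)

def grad_i(x, i):
    # Gradient of the i-th component function.
    return A[i] * (A[i] @ x - b[i])

def full_grad(x):
    # Exact gradient of the average (1/n) * sum_i f_i(x).
    return A.T @ (A @ x - b) / n

def vr_estimator(x, x_anchor, batch_size):
    """SVRG-style variance-reduced estimator: a mini-batch correction
    around a full gradient computed at an anchor point. It is unbiased
    for full_grad(x), and batch_size can vary per node / per iteration."""
    S = rng.integers(0, n, size=batch_size)  # sampled component indices
    correction = np.mean([grad_i(x, i) - grad_i(x_anchor, i) for i in S], axis=0)
    return full_grad(x_anchor) + correction

x_anchor = np.zeros(d)
x = rng.standard_normal(d)

# Averaging many independent draws should recover the exact gradient,
# confirming the estimator is unbiased.
est = np.mean([vr_estimator(x, x_anchor, batch_size=8) for _ in range(20000)], axis=0)
print(np.linalg.norm(est - full_grad(x)))
```

In a decentralized setting each node would hold its own component functions and run such an estimator locally, which is where allowing different mini-batch sizes across nodes comes into play.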

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-liu24i,
  title     = {Decentralized Convex Finite-Sum Optimization with Better Dependence on Condition Numbers},
  author    = {Liu, Yuxing and Chen, Lesi and Luo, Luo},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {30807--30841},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/liu24i/liu24i.pdf},
  url       = {https://proceedings.mlr.press/v235/liu24i.html},
  abstract  = {This paper studies decentralized optimization problem, where the local objective on each node is an average of a finite set of convex functions and the global function is strongly convex. We propose an efficient stochastic variance reduced first-order method that allows the different nodes to establish their stochastic local gradient estimator with different mini-batch sizes per iteration. We prove the upper bound on the computation time of the proposed method contains the dependence on the global condition number, which is sharper than the previous results that only depend on the local condition numbers. Compared with the state-of-the-art methods, we also show that our method requires less local incremental first-order oracle calls and comparable communication cost. We further perform numerical experiments to validate the advantage of our method.}
}
Endnote
%0 Conference Paper
%T Decentralized Convex Finite-Sum Optimization with Better Dependence on Condition Numbers
%A Yuxing Liu
%A Lesi Chen
%A Luo Luo
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-liu24i
%I PMLR
%P 30807--30841
%U https://proceedings.mlr.press/v235/liu24i.html
%V 235
%X This paper studies decentralized optimization problem, where the local objective on each node is an average of a finite set of convex functions and the global function is strongly convex. We propose an efficient stochastic variance reduced first-order method that allows the different nodes to establish their stochastic local gradient estimator with different mini-batch sizes per iteration. We prove the upper bound on the computation time of the proposed method contains the dependence on the global condition number, which is sharper than the previous results that only depend on the local condition numbers. Compared with the state-of-the-art methods, we also show that our method requires less local incremental first-order oracle calls and comparable communication cost. We further perform numerical experiments to validate the advantage of our method.
APA
Liu, Y., Chen, L. & Luo, L. (2024). Decentralized Convex Finite-Sum Optimization with Better Dependence on Condition Numbers. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:30807-30841. Available from https://proceedings.mlr.press/v235/liu24i.html.