Distributed Event-Based Learning via ADMM

Guner Dilsad Er, Sebastian Trimpe, Michael Muehlebach
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:15384-15418, 2025.

Abstract

We consider a distributed learning problem, where agents minimize a global objective function by exchanging information over a network. Our approach has two distinct features: (i) It substantially reduces communication by triggering communication only when necessary, and (ii) it is agnostic to the data-distribution among the different agents. We can therefore guarantee convergence even if the local data-distributions of the agents are arbitrarily distinct. We analyze the convergence rate of the algorithm both in convex and nonconvex settings and derive accelerated convergence rates in a convex setting. We also characterize the effect of communication failures and demonstrate that our algorithm is robust to communication failures. The article concludes by presenting numerical results from distributed learning tasks on the MNIST and CIFAR-10 datasets. The experiments underline communication savings of 35% or more due to the event-based communication strategy, show resilience towards heterogeneous data-distributions, and highlight that our approach outperforms common baselines such as FedAvg, FedProx, SCAFFOLD and FedADMM.
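The paper's core idea is that agents run ADMM locally and transmit an update only when their local iterate has changed enough since the last transmission. The following is a minimal illustrative sketch of that event-triggered communication pattern on a toy consensus problem (scalar quadratics f_i(x) = ½(x − a_i)²), not the paper's actual algorithm: the threshold rule, step sizes, and aggregation here are simplified assumptions for exposition.

```python
# Illustrative sketch: consensus ADMM with an event-triggered broadcast rule.
# Agent i re-transmits its local iterate only when it drifts more than
# `delta` from the last value it broadcast, reducing communication.

def event_triggered_admm(a, rho=1.0, delta=1e-3, iters=200):
    n = len(a)
    z = 0.0                # global consensus variable
    u = [0.0] * n          # scaled dual variables
    x_hat = [0.0] * n      # last transmitted local iterates
    messages = 0           # count of actual transmissions
    for _ in range(iters):
        for i in range(n):
            # local proximal step (closed form for f_i(x) = 0.5*(x - a_i)^2)
            x_i = (a[i] + rho * (z - u[i])) / (1.0 + rho)
            # event trigger: communicate only on a significant change
            if abs(x_i - x_hat[i]) > delta:
                x_hat[i] = x_i
                messages += 1
        # aggregation uses the *transmitted* iterates only
        z = sum(x_hat[i] + u[i] for i in range(n)) / n
        for i in range(n):
            u[i] += x_hat[i] - z
    return z, messages

z, msgs = event_triggered_admm([1.0, 2.0, 6.0])
print(z, msgs)  # z approaches the average of a (here 3.0); msgs << n * iters
```

With a small threshold the consensus variable still settles near the minimizer, while agents stop transmitting once their iterates stabilize, which is the qualitative behavior behind the reported communication savings.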

Cite this Paper

BibTeX
@InProceedings{pmlr-v267-er25a,
  title     = {Distributed Event-Based Learning via {ADMM}},
  author    = {Er, Guner Dilsad and Trimpe, Sebastian and Muehlebach, Michael},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {15384--15418},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/er25a/er25a.pdf},
  url       = {https://proceedings.mlr.press/v267/er25a.html},
  abstract  = {We consider a distributed learning problem, where agents minimize a global objective function by exchanging information over a network. Our approach has two distinct features: (i) It substantially reduces communication by triggering communication only when necessary, and (ii) it is agnostic to the data-distribution among the different agents. We can therefore guarantee convergence even if the local data-distributions of the agents are arbitrarily distinct. We analyze the convergence rate of the algorithm both in convex and nonconvex settings and derive accelerated convergence rates in a convex setting. We also characterize the effect of communication failures and demonstrate that our algorithm is robust to communication failures. The article concludes by presenting numerical results from distributed learning tasks on the MNIST and CIFAR-10 datasets. The experiments underline communication savings of 35% or more due to the event-based communication strategy, show resilience towards heterogeneous data-distributions, and highlight that our approach outperforms common baselines such as FedAvg, FedProx, SCAFFOLD and FedADMM.}
}
Endnote
%0 Conference Paper
%T Distributed Event-Based Learning via ADMM
%A Guner Dilsad Er
%A Sebastian Trimpe
%A Michael Muehlebach
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-er25a
%I PMLR
%P 15384--15418
%U https://proceedings.mlr.press/v267/er25a.html
%V 267
%X We consider a distributed learning problem, where agents minimize a global objective function by exchanging information over a network. Our approach has two distinct features: (i) It substantially reduces communication by triggering communication only when necessary, and (ii) it is agnostic to the data-distribution among the different agents. We can therefore guarantee convergence even if the local data-distributions of the agents are arbitrarily distinct. We analyze the convergence rate of the algorithm both in convex and nonconvex settings and derive accelerated convergence rates in a convex setting. We also characterize the effect of communication failures and demonstrate that our algorithm is robust to communication failures. The article concludes by presenting numerical results from distributed learning tasks on the MNIST and CIFAR-10 datasets. The experiments underline communication savings of 35% or more due to the event-based communication strategy, show resilience towards heterogeneous data-distributions, and highlight that our approach outperforms common baselines such as FedAvg, FedProx, SCAFFOLD and FedADMM.
APA
Er, G.D., Trimpe, S. & Muehlebach, M. (2025). Distributed Event-Based Learning via ADMM. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:15384-15418. Available from https://proceedings.mlr.press/v267/er25a.html.