Selective Collaboration for Robust Federated Learning

Nazarii Tupitsa, Samuel Horváth, Martin Takáč, Eduard Gorbunov
Conference on Parsimony and Learning, PMLR 328:1161-1194, 2026.

Abstract

Federated Learning (FL) revolutionizes machine learning by enabling model training across decentralized data sources without aggregating sensitive client data. However, the inherent heterogeneity of client data presents unique challenges, as not all client contributions positively impact model performance. In this work, we propose a novel algorithm, Merit-Based Federated Averaging, which dynamically assigns aggregation weights to clients based on their data distribution's relevance to a target objective. By leveraging stochastic gradients and solving an auxiliary optimization problem, our method adaptively identifies beneficial collaborators, ensuring efficient and robust learning. We establish theoretical convergence guarantees under mild assumptions and demonstrate that our algorithm achieves superior convergence by harnessing the advantages of diverse yet complementary datasets. Empirical evaluations highlight its ability to mitigate the adverse effects of outlier and adversarial clients, paving the way for more effective and resilient FL in heterogeneous environments.
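To make the aggregation idea concrete, the sketch below illustrates one plausible instantiation of merit-based weighted averaging. The abstract does not specify the auxiliary optimization problem, so this example substitutes a simple heuristic (softmax over cosine similarity between each client's stochastic gradient and a target-objective gradient); the function names and the weighting rule are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

def merit_weights(client_grads, target_grad, temperature=1.0):
    """Assign larger aggregation weights to clients whose stochastic
    gradients align with the gradient of the target objective.
    Illustrative heuristic only (softmax over cosine similarities);
    the paper instead solves an auxiliary optimization problem."""
    sims = np.array([
        g @ target_grad
        / (np.linalg.norm(g) * np.linalg.norm(target_grad) + 1e-12)
        for g in client_grads
    ])
    w = np.exp(sims / temperature)
    return w / w.sum()  # weights lie on the probability simplex

def merit_fed_step(x, client_grads, target_grad, lr=0.1):
    """One server round: merit-weighted average of client gradients,
    followed by a gradient-descent step on the global model x."""
    w = merit_weights(client_grads, target_grad)
    agg = sum(wi * g for wi, g in zip(w, client_grads))
    return x - lr * agg, w
```

In this toy setup, a client whose gradient points opposite to the target direction (e.g., an adversarial client) receives an exponentially smaller weight, so its contribution to the update is suppressed rather than averaged in at full strength.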

Cite this Paper


BibTeX
@InProceedings{pmlr-v328-tupitsa26a,
  title     = {Selective Collaboration for Robust Federated Learning},
  author    = {Tupitsa, Nazarii and Horv\'{a}th, Samuel and Tak\'{a}\v{c}, Martin and Gorbunov, Eduard},
  booktitle = {Conference on Parsimony and Learning},
  pages     = {1161--1194},
  year      = {2026},
  editor    = {Burkholz, Rebekka and Liu, Shiwei and Ravishankar, Saiprasad and Redman, William and Huang, Wei and Su, Weijie and Zhu, Zhihui},
  volume    = {328},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--26 Mar},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v328/main/assets/tupitsa26a/tupitsa26a.pdf},
  url       = {https://proceedings.mlr.press/v328/tupitsa26a.html},
  abstract  = {Federated Learning (FL) revolutionizes machine learning by enabling model training across decentralized data sources without aggregating sensitive client data. However, the inherent heterogeneity of client data presents unique challenges, as not all client contributions positively impact model performance. In this work, we propose a novel algorithm, Merit-Based Federated Averaging (\Algn), which dynamically assigns aggregation weights to clients based on their data distribution's relevance to a target objective. By leveraging stochastic gradients and solving an auxiliary optimization problem, our method adaptively identifies beneficial collaborators, ensuring efficient and robust learning. We establish theoretical convergence guarantees under mild assumptions and demonstrate that \Algn achieves superior convergence by harnessing the advantages of diverse yet complementary datasets. Empirical evaluations highlight its ability to mitigate the adverse effects of outlier and adversarial clients, paving the way for more effective and resilient FL in heterogeneous environments.}
}
Endnote
%0 Conference Paper
%T Selective Collaboration for Robust Federated Learning
%A Nazarii Tupitsa
%A Samuel Horváth
%A Martin Takáč
%A Eduard Gorbunov
%B Conference on Parsimony and Learning
%C Proceedings of Machine Learning Research
%D 2026
%E Rebekka Burkholz
%E Shiwei Liu
%E Saiprasad Ravishankar
%E William Redman
%E Wei Huang
%E Weijie Su
%E Zhihui Zhu
%F pmlr-v328-tupitsa26a
%I PMLR
%P 1161--1194
%U https://proceedings.mlr.press/v328/tupitsa26a.html
%V 328
%X Federated Learning (FL) revolutionizes machine learning by enabling model training across decentralized data sources without aggregating sensitive client data. However, the inherent heterogeneity of client data presents unique challenges, as not all client contributions positively impact model performance. In this work, we propose a novel algorithm, Merit-Based Federated Averaging (\Algn), which dynamically assigns aggregation weights to clients based on their data distribution's relevance to a target objective. By leveraging stochastic gradients and solving an auxiliary optimization problem, our method adaptively identifies beneficial collaborators, ensuring efficient and robust learning. We establish theoretical convergence guarantees under mild assumptions and demonstrate that \Algn achieves superior convergence by harnessing the advantages of diverse yet complementary datasets. Empirical evaluations highlight its ability to mitigate the adverse effects of outlier and adversarial clients, paving the way for more effective and resilient FL in heterogeneous environments.
APA
Tupitsa, N., Horváth, S., Takáč, M. & Gorbunov, E. (2026). Selective Collaboration for Robust Federated Learning. Conference on Parsimony and Learning, in Proceedings of Machine Learning Research 328:1161-1194. Available from https://proceedings.mlr.press/v328/tupitsa26a.html.