Towards Trustworthy Federated Learning with Untrusted Participants

Youssef Allouah, Rachid Guerraoui, John Stephan
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:1184-1227, 2025.

Abstract

Resilience against malicious participants and data privacy are essential for trustworthy federated learning, yet achieving both with good utility typically requires the strong assumption of a trusted central server. This paper shows that a significantly weaker assumption suffices: each pair of participants shares a randomness seed unknown to others. In a setting where malicious participants may collude with an untrusted server, we propose CafCor, an algorithm that integrates robust gradient aggregation with correlated noise injection, using shared randomness between participants. We prove that CafCor achieves strong privacy-utility trade-offs, significantly outperforming local differential privacy (DP) methods, which do not make any trust assumption, while approaching central DP utility, where the server is fully trusted. Empirical results on standard benchmarks validate CafCor’s practicality, showing that privacy and robustness can coexist in distributed systems without sacrificing utility or trusting the server.
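To make the abstract's core mechanism concrete, here is a minimal sketch of correlated noise injection from pairwise shared seeds, under stated assumptions: each pair of participants derives antisymmetric noise from a common seed, so the noise cancels exactly when honest contributions are summed, while any strict subset of colluders only sees masked gradients. The helper names (pairwise_seed, correlated_noise), the Gaussian noise choice, and the hash-based seed derivation are illustrative assumptions, not the paper's actual implementation.

    # Hypothetical sketch of pairwise correlated noise, the mechanism the
    # abstract attributes to CafCor; illustrative only, not the paper's code.
    import numpy as np

    def pairwise_seed(i: int, j: int) -> int:
        # Assumption: each unordered pair {i, j} shares a seed unknown to
        # others. Demo only; a real system would use a cryptographic PRG.
        a, b = min(i, j), max(i, j)
        return hash((a, b)) % (2**32)

    def correlated_noise(i: int, n: int, dim: int, std: float) -> np.ndarray:
        # Participant i sums one antisymmetric noise term per peer j: the
        # term for pair {i, j} enters with opposite signs at i and j, so
        # the noise cancels exactly when honest contributions are summed.
        noise = np.zeros(dim)
        for j in range(n):
            if j == i:
                continue
            rng = np.random.default_rng(pairwise_seed(i, j))
            sign = 1.0 if i < j else -1.0
            noise += sign * rng.normal(0.0, std, dim)
        return noise

    # Toy check: the correlated noise vanishes in the sum over all participants.
    n, dim = 5, 3
    total = sum(correlated_noise(i, n, dim, std=1.0) for i in range(n))
    assert np.allclose(total, 0.0)

In the full algorithm, each participant would presumably also add a small amount of independent noise for differential privacy against colluding parties, and the server would apply robust aggregation to the masked gradients; this sketch only shows the cancellation property that lets utility approach the central-DP regime.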

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-allouah25a,
  title     = {Towards Trustworthy Federated Learning with Untrusted Participants},
  author    = {Allouah, Youssef and Guerraoui, Rachid and Stephan, John},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {1184--1227},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/allouah25a/allouah25a.pdf},
  url       = {https://proceedings.mlr.press/v267/allouah25a.html},
  abstract  = {Resilience against malicious participants and data privacy are essential for trustworthy federated learning, yet achieving both with good utility typically requires the strong assumption of a trusted central server. This paper shows that a significantly weaker assumption suffices: each pair of participants shares a randomness seed unknown to others. In a setting where malicious participants may collude with an untrusted server, we propose CafCor, an algorithm that integrates robust gradient aggregation with correlated noise injection, using shared randomness between participants. We prove that CafCor achieves strong privacy-utility trade-offs, significantly outperforming local differential privacy (DP) methods, which do not make any trust assumption, while approaching central DP utility, where the server is fully trusted. Empirical results on standard benchmarks validate CafCor’s practicality, showing that privacy and robustness can coexist in distributed systems without sacrificing utility or trusting the server.}
}
EndNote
%0 Conference Paper
%T Towards Trustworthy Federated Learning with Untrusted Participants
%A Youssef Allouah
%A Rachid Guerraoui
%A John Stephan
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-allouah25a
%I PMLR
%P 1184--1227
%U https://proceedings.mlr.press/v267/allouah25a.html
%V 267
%X Resilience against malicious participants and data privacy are essential for trustworthy federated learning, yet achieving both with good utility typically requires the strong assumption of a trusted central server. This paper shows that a significantly weaker assumption suffices: each pair of participants shares a randomness seed unknown to others. In a setting where malicious participants may collude with an untrusted server, we propose CafCor, an algorithm that integrates robust gradient aggregation with correlated noise injection, using shared randomness between participants. We prove that CafCor achieves strong privacy-utility trade-offs, significantly outperforming local differential privacy (DP) methods, which do not make any trust assumption, while approaching central DP utility, where the server is fully trusted. Empirical results on standard benchmarks validate CafCor’s practicality, showing that privacy and robustness can coexist in distributed systems without sacrificing utility or trusting the server.
APA
Allouah, Y., Guerraoui, R. & Stephan, J. (2025). Towards Trustworthy Federated Learning with Untrusted Participants. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:1184-1227. Available from https://proceedings.mlr.press/v267/allouah25a.html.
