SparseFed: Mitigating Model Poisoning Attacks in Federated Learning with Sparsification

Ashwinee Panda, Saeed Mahloujifar, Arjun Nitin Bhagoji, Supriyo Chakraborty, Prateek Mittal
Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:7587-7624, 2022.

Abstract

Federated learning is inherently vulnerable to model poisoning attacks because its decentralized nature allows attackers to participate with compromised devices. In model poisoning attacks, the attacker reduces the model’s performance on targeted sub-tasks (e.g. classifying planes as birds) by uploading "poisoned" updates. In this paper we introduce SparseFed, a novel defense that uses global top-k update sparsification and device-level gradient clipping to mitigate model poisoning attacks. We propose a theoretical framework for analyzing the robustness of defenses against poisoning attacks, and provide robustness and convergence analysis of our algorithm. To validate its empirical efficacy we conduct an open-source evaluation at scale across multiple benchmark datasets for computer vision and federated learning.
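
For concreteness, here is a minimal sketch (in NumPy) of the server-side round the abstract describes: device-level clipping of each update followed by global top-k sparsification of the aggregate. The error-feedback residual, the learning rate, and all names are illustrative assumptions on our part, not the authors' reference implementation; see the paper and its open-source evaluation for the actual algorithm.

import numpy as np

def sparsefed_round(model, client_updates, residual, k, clip_norm, lr=1.0):
    """One server round: clip each update, aggregate, top-k sparsify, apply."""
    clipped = []
    for g in client_updates:
        norm = np.linalg.norm(g)
        # Device-level clipping bounds any single device's influence.
        clipped.append(g * min(1.0, clip_norm / norm) if norm > 0 else g)
    agg = np.mean(clipped, axis=0) + residual  # carry over mass not yet applied

    # Global top-k sparsification: keep only the k largest-magnitude coordinates.
    idx = np.argpartition(np.abs(agg), -k)[-k:]
    sparse = np.zeros_like(agg)
    sparse[idx] = agg[idx]

    residual = agg - sparse      # error feedback: unapplied coordinates accumulate
    model = model - lr * sparse  # update the model with the sparsified aggregate
    return model, residual

The intuition is that a poisoned update can only shift the model along coordinates that survive the global top-k step, and clipping caps how much any single device can push a coordinate into that set.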

Cite this Paper


BibTeX
@InProceedings{pmlr-v151-panda22a,
  title     = {SparseFed: Mitigating Model Poisoning Attacks in Federated Learning with Sparsification},
  author    = {Panda, Ashwinee and Mahloujifar, Saeed and Nitin Bhagoji, Arjun and Chakraborty, Supriyo and Mittal, Prateek},
  booktitle = {Proceedings of The 25th International Conference on Artificial Intelligence and Statistics},
  pages     = {7587--7624},
  year      = {2022},
  editor    = {Camps-Valls, Gustau and Ruiz, Francisco J. R. and Valera, Isabel},
  volume    = {151},
  series    = {Proceedings of Machine Learning Research},
  month     = {28--30 Mar},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v151/panda22a/panda22a.pdf},
  url       = {https://proceedings.mlr.press/v151/panda22a.html},
  abstract  = {Federated learning is inherently vulnerable to model poisoning attacks because its decentralized nature allows attackers to participate with compromised devices. In model poisoning attacks, the attacker reduces the model’s performance on targeted sub-tasks (e.g. classifying planes as birds) by uploading "poisoned" updates. In this paper we introduce SparseFed, a novel defense that uses global top-k update sparsification and device-level gradient clipping to mitigate model poisoning attacks. We propose a theoretical framework for analyzing the robustness of defenses against poisoning attacks, and provide robustness and convergence analysis of our algorithm. To validate its empirical efficacy we conduct an open-source evaluation at scale across multiple benchmark datasets for computer vision and federated learning.}
}
Endnote
%0 Conference Paper
%T SparseFed: Mitigating Model Poisoning Attacks in Federated Learning with Sparsification
%A Ashwinee Panda
%A Saeed Mahloujifar
%A Arjun Nitin Bhagoji
%A Supriyo Chakraborty
%A Prateek Mittal
%B Proceedings of The 25th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2022
%E Gustau Camps-Valls
%E Francisco J. R. Ruiz
%E Isabel Valera
%F pmlr-v151-panda22a
%I PMLR
%P 7587--7624
%U https://proceedings.mlr.press/v151/panda22a.html
%V 151
%X Federated learning is inherently vulnerable to model poisoning attacks because its decentralized nature allows attackers to participate with compromised devices. In model poisoning attacks, the attacker reduces the model’s performance on targeted sub-tasks (e.g. classifying planes as birds) by uploading "poisoned" updates. In this paper we introduce SparseFed, a novel defense that uses global top-k update sparsification and device-level gradient clipping to mitigate model poisoning attacks. We propose a theoretical framework for analyzing the robustness of defenses against poisoning attacks, and provide robustness and convergence analysis of our algorithm. To validate its empirical efficacy we conduct an open-source evaluation at scale across multiple benchmark datasets for computer vision and federated learning.
APA
Panda, A., Mahloujifar, S., Nitin Bhagoji, A., Chakraborty, S. & Mittal, P. (2022). SparseFed: Mitigating Model Poisoning Attacks in Federated Learning with Sparsification. Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 151:7587-7624. Available from https://proceedings.mlr.press/v151/panda22a.html.