CRFL: Certifiably Robust Federated Learning against Backdoor Attacks

Chulin Xie, Minghao Chen, Pin-Yu Chen, Bo Li
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:11372-11382, 2021.

Abstract

Federated Learning (FL), a distributed learning paradigm that aggregates information from diverse clients to train a shared global model, has demonstrated great success. However, malicious clients can perform poisoning attacks and model replacement to introduce backdoors into the trained global model. Although there have been intensive studies of robust aggregation methods and empirical robust federated training protocols against backdoors, existing approaches lack robustness certification. This paper provides the first general framework, Certifiably Robust Federated Learning (CRFL), to train certifiably robust FL models against backdoors. Our method exploits clipping and smoothing on model parameters to control the global model smoothness, which yields a sample-wise robustness certification against backdoors of limited magnitude. Our certification also specifies its relation to federated learning parameters, such as the instance-level poisoning ratio, the number of attackers, and the number of training iterations. In practice, we conduct comprehensive experiments across a range of federated datasets and provide the first benchmark for certified robustness against backdoor attacks in federated learning. Our code is publicly available at https://github.com/AI-secure/CRFL.
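
The core mechanism the abstract describes, clipping the aggregated global model's parameter norm and then adding Gaussian noise to smooth it, can be sketched in a few lines. The Python/NumPy snippet below is a minimal illustration under assumed names (clip_threshold, sigma) and an assumed global L2-norm clipping rule; it is not the authors' implementation, which is available at the linked repository.

import numpy as np

def clip_and_perturb(global_params, clip_threshold, sigma, rng=None):
    # Scale the global model so its overall L2 norm is at most
    # clip_threshold, then add isotropic Gaussian noise to every
    # parameter tensor (the "clipping and smoothing" step above).
    rng = rng if rng is not None else np.random.default_rng()
    total_norm = np.sqrt(sum(float(np.sum(p ** 2)) for p in global_params))
    scale = min(1.0, clip_threshold / (total_norm + 1e-12))
    return [scale * p + rng.normal(0.0, sigma, size=p.shape)
            for p in global_params]

# Illustrative usage on a toy two-tensor "model":
params = [np.random.randn(10, 5), np.random.randn(5)]
smoothed = clip_and_perturb(params, clip_threshold=15.0, sigma=0.01)

Intuitively, the clipping threshold bounds how far any single round's aggregation can move the model, while the noise scale sigma controls the smoothness that the sample-wise certification exploits: together they determine the largest backdoor perturbation that can be certified against.
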

Cite this Paper

BibTeX
@InProceedings{pmlr-v139-xie21a,
  title     = {CRFL: Certifiably Robust Federated Learning against Backdoor Attacks},
  author    = {Xie, Chulin and Chen, Minghao and Chen, Pin-Yu and Li, Bo},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {11372--11382},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/xie21a/xie21a.pdf},
  url       = {https://proceedings.mlr.press/v139/xie21a.html}
}
Endnote
%0 Conference Paper
%T CRFL: Certifiably Robust Federated Learning against Backdoor Attacks
%A Chulin Xie
%A Minghao Chen
%A Pin-Yu Chen
%A Bo Li
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-xie21a
%I PMLR
%P 11372--11382
%U https://proceedings.mlr.press/v139/xie21a.html
%V 139
APA
Xie, C., Chen, M., Chen, P.-Y., & Li, B. (2021). CRFL: Certifiably Robust Federated Learning against Backdoor Attacks. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:11372-11382. Available from https://proceedings.mlr.press/v139/xie21a.html.