Active Membership Inference Attack under Local Differential Privacy in Federated Learning

Truc Nguyen, Phung Lai, Khang Tran, NhatHai Phan, My T. Thai
Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, PMLR 206:5714-5730, 2023.

Abstract

Federated learning (FL) was originally regarded as a framework for collaborative learning among clients with data privacy protection through a coordinating server. In this paper, we propose a new active membership inference (AMI) attack carried out by a dishonest server in FL. In AMI attacks, the server crafts and embeds malicious parameters into global models to effectively infer whether a target data sample is included in a client’s private training data. By exploiting the correlation among data features through a non-linear decision boundary, AMI attacks with a certified guarantee of success can achieve alarmingly high success rates even under rigorous local differential privacy (LDP) protection, thereby exposing clients’ training data to significant privacy risk. Theoretical and experimental results on several benchmark datasets show that adding sufficient privacy-preserving noise to prevent our attack would significantly damage FL’s model utility.
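To make the mechanism concrete, the sketch below illustrates the general idea behind an active attack of this kind: the server plants a "detector" neuron in the global model, and the gradient the client reports for that neuron is nonzero only when some sample in the local batch activates it. This is a simplified reconstruction under stated assumptions, not the paper's actual construction (which exploits feature correlations through a non-linear decision boundary and carries a certified success guarantee). All names here (craft_detector, client_gradient, server_infers_membership) are hypothetical, and the Gaussian noise is only a loose stand-in for a formal LDP mechanism.

```python
# Illustrative sketch, NOT the paper's construction: a dishonest server
# plants a ReLU "detector" neuron whose gradient reveals whether a target
# sample is present in a client's local batch.

import numpy as np

rng = np.random.default_rng(0)

def craft_detector(target_x, margin=0.05):
    """Server-side (hypothetical): align the weights with the target sample
    so the ReLU detector fires (almost) only on inputs highly correlated
    with target_x."""
    w = target_x / np.linalg.norm(target_x)          # direction of the target
    b = -(1.0 - margin) * np.linalg.norm(target_x)   # threshold just below <w, target_x>
    return w, b

def client_gradient(batch, w, b, ldp_scale=0.0):
    """Client-side: gradient of sum(ReLU(w @ x + b)) over the local batch,
    optionally randomized with Gaussian noise as a crude stand-in for an
    LDP mechanism (the paper analyzes formal LDP protocols, not this)."""
    grad = np.zeros_like(w)
    for x in batch:
        if w @ x + b > 0:          # ReLU active: this sample contributes
            grad += x
    if ldp_scale > 0:
        grad += rng.normal(0.0, ldp_scale, size=grad.shape)
    return grad

def server_infers_membership(grad, threshold=1e-3):
    """Server-side: a gradient norm clearly above the noise floor means some
    sample activated the detector, i.e. the target is likely present."""
    return np.linalg.norm(grad) > threshold

# Toy demo: 32-dimensional data, one client batch with / without the target.
d = 32
target = rng.normal(size=d)
others = rng.normal(size=(8, d))
w, b = craft_detector(target)

grad_in = client_gradient(np.vstack([others, target]), w, b)
grad_out = client_gradient(others, w, b)
print(server_infers_membership(grad_in))   # True: target activated the detector
print(server_infers_membership(grad_out))  # False (w.h.p.): no activation
```

Raising ldp_scale until the detector's gradient is masked illustrates the trade-off stated in the abstract: noise at that magnitude also corrupts the gradients FL needs for learning, so the model's utility suffers.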

Cite this Paper

BibTeX
@InProceedings{pmlr-v206-nguyen23e,
  title     = {Active Membership Inference Attack under Local Differential Privacy in Federated Learning},
  author    = {Nguyen, Truc and Lai, Phung and Tran, Khang and Phan, NhatHai and Thai, My T.},
  booktitle = {Proceedings of The 26th International Conference on Artificial Intelligence and Statistics},
  pages     = {5714--5730},
  year      = {2023},
  editor    = {Ruiz, Francisco and Dy, Jennifer and van de Meent, Jan-Willem},
  volume    = {206},
  series    = {Proceedings of Machine Learning Research},
  month     = {25--27 Apr},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v206/nguyen23e/nguyen23e.pdf},
  url       = {https://proceedings.mlr.press/v206/nguyen23e.html},
  abstract  = {Federated learning (FL) was originally regarded as a framework for collaborative learning among clients with data privacy protection through a coordinating server. In this paper, we propose a new active membership inference (AMI) attack carried out by a dishonest server in FL. In AMI attacks, the server crafts and embeds malicious parameters into global models to effectively infer whether a target data sample is included in a client’s private training data. By exploiting the correlation among data features through a non-linear decision boundary, AMI attacks with a certified guarantee of success can achieve alarmingly high success rates even under rigorous local differential privacy (LDP) protection, thereby exposing clients’ training data to significant privacy risk. Theoretical and experimental results on several benchmark datasets show that adding sufficient privacy-preserving noise to prevent our attack would significantly damage FL’s model utility.}
}
Endnote
%0 Conference Paper
%T Active Membership Inference Attack under Local Differential Privacy in Federated Learning
%A Truc Nguyen
%A Phung Lai
%A Khang Tran
%A NhatHai Phan
%A My T. Thai
%B Proceedings of The 26th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2023
%E Francisco Ruiz
%E Jennifer Dy
%E Jan-Willem van de Meent
%F pmlr-v206-nguyen23e
%I PMLR
%P 5714--5730
%U https://proceedings.mlr.press/v206/nguyen23e.html
%V 206
%X Federated learning (FL) was originally regarded as a framework for collaborative learning among clients with data privacy protection through a coordinating server. In this paper, we propose a new active membership inference (AMI) attack carried out by a dishonest server in FL. In AMI attacks, the server crafts and embeds malicious parameters into global models to effectively infer whether a target data sample is included in a client’s private training data. By exploiting the correlation among data features through a non-linear decision boundary, AMI attacks with a certified guarantee of success can achieve alarmingly high success rates even under rigorous local differential privacy (LDP) protection, thereby exposing clients’ training data to significant privacy risk. Theoretical and experimental results on several benchmark datasets show that adding sufficient privacy-preserving noise to prevent our attack would significantly damage FL’s model utility.
APA
Nguyen, T., Lai, P., Tran, K., Phan, N. & Thai, M. T. (2023). Active Membership Inference Attack under Local Differential Privacy in Federated Learning. Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 206:5714-5730. Available from https://proceedings.mlr.press/v206/nguyen23e.html.
