Theoretically Unmasking Inference Attacks Against LDP-Protected Clients in Federated Vision Models

Quan Minh Nguyen, Minh N. Vu, Truc Nguyen, My T. Thai
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:46211-46241, 2025.

Abstract

Federated Learning (FL) enables collaborative learning among clients via a coordinating server while avoiding direct data sharing, offering a perceived solution to preserve privacy. However, recent studies on Membership Inference Attacks (MIAs) have challenged this notion, showing high success rates against unprotected training data. While local differential privacy (LDP) is widely regarded as a gold standard for privacy protection in data analysis, most studies on MIAs either neglect LDP or fail to provide theoretical guarantees for attack success against LDP-protected data. To address this gap, we derive theoretical lower bounds for the success rates of low-polynomial-time MIAs that exploit vulnerabilities in fully connected or self-attention layers, regardless of the LDP mechanism used. We establish that even when data are protected by LDP, privacy risks persist, depending on the privacy budget. Practical evaluations on models like ResNet and Vision Transformer confirm considerable privacy risks, revealing that the noise required to mitigate these attacks significantly degrades models’ utility.
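For illustration only (not from the paper): a minimal sketch of one common way a federated client could LDP-perturb its model update before sharing it with the server, with the privacy budget epsilon controlling the noise scale and hence the utility loss the abstract refers to. The Gaussian mechanism, the clipping norm, and treating clip_norm as the L2 sensitivity are assumptions made here; the paper's lower bounds are stated regardless of the LDP mechanism used.

import numpy as np

def gaussian_ldp_perturb(update, clip_norm=1.0, epsilon=1.0, delta=1e-5, rng=None):
    # Clip the client update to L2 norm clip_norm, then add Gaussian noise
    # calibrated by the standard Gaussian-mechanism bound, assuming clip_norm
    # is the L2 sensitivity of the released update (an illustrative assumption).
    rng = np.random.default_rng() if rng is None else rng
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    sigma = clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return clipped + rng.normal(0.0, sigma, size=update.shape)

# Smaller epsilon => larger sigma => stronger protection but lower model utility.
noisy_update = gaussian_ldp_perturb(np.ones(8), epsilon=0.5)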

Cite this Paper

BibTeX
@InProceedings{pmlr-v267-nguyen25j,
  title     = {Theoretically Unmasking Inference Attacks Against {LDP}-Protected Clients in Federated Vision Models},
  author    = {Nguyen, Quan Minh and Vu, Minh N. and Nguyen, Truc and Thai, My T.},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {46211--46241},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/nguyen25j/nguyen25j.pdf},
  url       = {https://proceedings.mlr.press/v267/nguyen25j.html},
  abstract  = {Federated Learning (FL) enables collaborative learning among clients via a coordinating server while avoiding direct data sharing, offering a perceived solution to preserve privacy. However, recent studies on Membership Inference Attacks (MIAs) have challenged this notion, showing high success rates against unprotected training data. While local differential privacy (LDP) is widely regarded as a gold standard for privacy protection in data analysis, most studies on MIAs either neglect LDP or fail to provide theoretical guarantees for attack success against LDP-protected data. To address this gap, we derive theoretical lower bounds for the success rates of low-polynomial-time MIAs that exploit vulnerabilities in fully connected or self-attention layers, regardless of the LDP mechanism used. We establish that even when data are protected by LDP, privacy risks persist, depending on the privacy budget. Practical evaluations on models like ResNet and Vision Transformer confirm considerable privacy risks, revealing that the noise required to mitigate these attacks significantly degrades models’ utility.}
}
Endnote
%0 Conference Paper
%T Theoretically Unmasking Inference Attacks Against LDP-Protected Clients in Federated Vision Models
%A Quan Minh Nguyen
%A Minh N. Vu
%A Truc Nguyen
%A My T. Thai
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-nguyen25j
%I PMLR
%P 46211--46241
%U https://proceedings.mlr.press/v267/nguyen25j.html
%V 267
%X Federated Learning (FL) enables collaborative learning among clients via a coordinating server while avoiding direct data sharing, offering a perceived solution to preserve privacy. However, recent studies on Membership Inference Attacks (MIAs) have challenged this notion, showing high success rates against unprotected training data. While local differential privacy (LDP) is widely regarded as a gold standard for privacy protection in data analysis, most studies on MIAs either neglect LDP or fail to provide theoretical guarantees for attack success against LDP-protected data. To address this gap, we derive theoretical lower bounds for the success rates of low-polynomial-time MIAs that exploit vulnerabilities in fully connected or self-attention layers, regardless of the LDP mechanism used. We establish that even when data are protected by LDP, privacy risks persist, depending on the privacy budget. Practical evaluations on models like ResNet and Vision Transformer confirm considerable privacy risks, revealing that the noise required to mitigate these attacks significantly degrades models’ utility.
APA
Nguyen, Q.M., Vu, M.N., Nguyen, T. & Thai, M.T. (2025). Theoretically Unmasking Inference Attacks Against LDP-Protected Clients in Federated Vision Models. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:46211-46241. Available from https://proceedings.mlr.press/v267/nguyen25j.html.