Deep Leakage from Model in Federated Learning

Zihao Zhao, Mengen Luo, Wenbo Ding
Conference on Parsimony and Learning, PMLR 234:324-340, 2024.

Abstract

Federated Learning (FL) was conceived as a secure form of distributed learning that keeps private training data local and only communicates public model gradients between clients. However, numerous gradient leakage attacks proposed to date undermine this claim by showing that private training data can be reconstructed from the shared gradients. A common limitation of these attacks is that they require extensive auxiliary information, such as model weights, optimizers, and certain hyperparameters (e.g., the learning rate), which are difficult to obtain in practical scenarios. Furthermore, several existing algorithms, including FedAvg, avoid transmitting model gradients in FL by sending model weights instead, but the potential security breaches of this approach are seldom considered. In this paper, we propose two novel frameworks, DLM and DLM+, that reveal the potential leakage of clients' private local data when model weights are transmitted under the FL framework. We also conduct a series of experiments to illustrate the impact and generality of our attack frameworks. Additionally, we propose two defenses against the proposed attacks and evaluate their protective efficacy.
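As context for the abstract's claim that sharing model weights (as in FedAvg) can still leak private data, the sketch below illustrates the general weight-matching idea in PyTorch: an attacker who sees the model before and after one local SGD step optimizes dummy data, a soft label, and a guessed learning rate so that the simulated update matches the observed one. This is a minimal, hypothetical illustration under simplifying assumptions (toy model, a single local step, known architecture); it is not the paper's DLM or DLM+ algorithm, whose details are in the PDF linked below.

import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy two-layer model standing in for the shared global model.
model = nn.Sequential(nn.Linear(8, 16), nn.Tanh(), nn.Linear(16, 4))

# --- Client side: one local SGD step on a private example ---
x_private = torch.randn(1, 8)
y_private = torch.tensor([2])
lr_true = 0.1  # the client's learning rate, unknown to the attacker

loss = F.cross_entropy(model(x_private), y_private)
grads = torch.autograd.grad(loss, model.parameters())
w_before = [p.detach().clone() for p in model.parameters()]
w_after = [wb - lr_true * g for wb, g in zip(w_before, grads)]  # weights sent out

# --- Attacker side: observes w_before and w_after, but not x, y, or the learning rate ---
x_dummy = torch.randn(1, 8, requires_grad=True)     # dummy input to be optimized
y_logits = torch.randn(1, 4, requires_grad=True)    # soft dummy label to be optimized
lr_dummy = torch.tensor(0.05, requires_grad=True)   # guess for the unknown learning rate

opt = torch.optim.LBFGS([x_dummy, y_logits, lr_dummy], lr=0.5, max_iter=200)

def closure():
    opt.zero_grad()
    # Cross-entropy with a soft (optimizable) label on the dummy input.
    dummy_loss = torch.sum(-F.softmax(y_logits, dim=-1)
                           * F.log_softmax(model(x_dummy), dim=-1))
    dummy_grads = torch.autograd.grad(dummy_loss, model.parameters(), create_graph=True)
    # Match the simulated one-step weight update against the observed one.
    match = sum(((wb - lr_dummy * g) - wa).pow(2).sum()
                for wb, g, wa in zip(w_before, dummy_grads, w_after))
    match.backward()
    return match

opt.step(closure)
print("input reconstruction MSE:", F.mse_loss(x_dummy.detach(), x_private).item())

L-BFGS with a soft label follows the common setup of gradient-leakage attacks; jointly optimizing the learning rate reflects the point made in the abstract that, unlike gradient-based attacks, a weight-based attacker may not know the client's optimizer settings.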

Cite this Paper


BibTeX
@InProceedings{pmlr-v234-zhao24b,
  title     = {Deep Leakage from Model in Federated Learning},
  author    = {Zhao, Zihao and Luo, Mengen and Ding, Wenbo},
  booktitle = {Conference on Parsimony and Learning},
  pages     = {324--340},
  year      = {2024},
  editor    = {Chi, Yuejie and Dziugaite, Gintare Karolina and Qu, Qing and Wang, Atlas and Zhu, Zhihui},
  volume    = {234},
  series    = {Proceedings of Machine Learning Research},
  month     = {03--06 Jan},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v234/zhao24b/zhao24b.pdf},
  url       = {https://proceedings.mlr.press/v234/zhao24b.html},
  abstract  = {Federated Learning (FL) was conceived as a secure form of distributed learning that keeps private training data local and only communicates public model gradients between clients. However, numerous gradient leakage attacks proposed to date undermine this claim by showing that private training data can be reconstructed from the shared gradients. A common limitation of these attacks is that they require extensive auxiliary information, such as model weights, optimizers, and certain hyperparameters (e.g., the learning rate), which are difficult to obtain in practical scenarios. Furthermore, several existing algorithms, including FedAvg, avoid transmitting model gradients in FL by sending model weights instead, but the potential security breaches of this approach are seldom considered. In this paper, we propose two novel frameworks, DLM and DLM+, that reveal the potential leakage of clients' private local data when model weights are transmitted under the FL framework. We also conduct a series of experiments to illustrate the impact and generality of our attack frameworks. Additionally, we propose two defenses against the proposed attacks and evaluate their protective efficacy.}
}
Endnote
%0 Conference Paper
%T Deep Leakage from Model in Federated Learning
%A Zihao Zhao
%A Mengen Luo
%A Wenbo Ding
%B Conference on Parsimony and Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Yuejie Chi
%E Gintare Karolina Dziugaite
%E Qing Qu
%E Atlas Wang
%E Zhihui Zhu
%F pmlr-v234-zhao24b
%I PMLR
%P 324--340
%U https://proceedings.mlr.press/v234/zhao24b.html
%V 234
%X Federated Learning (FL) was conceived as a secure form of distributed learning that keeps private training data local and only communicates public model gradients between clients. However, numerous gradient leakage attacks proposed to date undermine this claim by showing that private training data can be reconstructed from the shared gradients. A common limitation of these attacks is that they require extensive auxiliary information, such as model weights, optimizers, and certain hyperparameters (e.g., the learning rate), which are difficult to obtain in practical scenarios. Furthermore, several existing algorithms, including FedAvg, avoid transmitting model gradients in FL by sending model weights instead, but the potential security breaches of this approach are seldom considered. In this paper, we propose two novel frameworks, DLM and DLM+, that reveal the potential leakage of clients' private local data when model weights are transmitted under the FL framework. We also conduct a series of experiments to illustrate the impact and generality of our attack frameworks. Additionally, we propose two defenses against the proposed attacks and evaluate their protective efficacy.
APA
Zhao, Z., Luo, M. & Ding, W. (2024). Deep Leakage from Model in Federated Learning. Conference on Parsimony and Learning, in Proceedings of Machine Learning Research 234:324-340. Available from https://proceedings.mlr.press/v234/zhao24b.html.
