SRATTA: Sample Re-ATTribution Attack of Secure Aggregation in Federated Learning.

Tanguy Marchand, Regis Loeb, Ulysse Marteau-Ferey, Jean Ogier Du Terrail, Arthur Pignet
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:23886-23914, 2023.

Abstract

We consider a federated learning (FL) setting where a machine learning model with a fully connected first layer is trained between different clients and a central server using FedAvg, and where the aggregation step can be performed with secure aggregation (SA). We present SRATTA, an attack relying only on aggregated models which, under realistic assumptions, (i) recovers data samples from the different clients, and (ii) groups data samples coming from the same client together. While sample recovery has already been explored in an FL setting, the ability to group samples per client, despite the use of SA, is novel. This poses a significant unforeseen security threat to FL and effectively breaks SA. We show that SRATTA is both theoretically grounded and can be used in practice on realistic models and datasets. We also propose counter-measures, and claim that clients should play an active role to guarantee their privacy during training.
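For readers unfamiliar with why a fully connected first layer leaks data, the following minimal Python/PyTorch sketch illustrates the well-known recovery principle that attacks in this family build on; it is not the paper's full SRATTA pipeline, and the names (TinyNet, recover_from_first_layer) are illustrative. For a first layer y = Wx + b, the gradients satisfy dL/dW[i,:] = dL/dy[i] * x and dL/db[i] = dL/dy[i], so dividing the weight gradient of an active neuron by its bias gradient returns the input sample.

# Illustrative sketch, assuming a batch of a single sample; with larger batches
# the recovered vectors are mixtures, which is where further analysis is needed.
import torch
import torch.nn as nn

torch.manual_seed(0)

class TinyNet(nn.Module):
    def __init__(self, d_in=8, d_hidden=16, d_out=3):
        super().__init__()
        self.fc1 = nn.Linear(d_in, d_hidden)   # fully connected first layer
        self.fc2 = nn.Linear(d_hidden, d_out)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

def recover_from_first_layer(grad_w, grad_b, eps=1e-9):
    """Return candidate inputs grad_w[i] / grad_b[i] for neurons with gradient signal."""
    return [grad_w[i] / grad_b[i]
            for i in range(grad_b.shape[0])
            if grad_b[i].abs() > eps]

model = TinyNet()
x = torch.randn(1, 8)                          # a single "private" sample
y = torch.tensor([1])
loss = nn.CrossEntropyLoss()(model(x), y)
loss.backward()

candidates = recover_from_first_layer(model.fc1.weight.grad, model.fc1.bias.grad)
assert candidates, "no active neuron received gradient signal"
errors = [(c - x.squeeze()).abs().max().item() for c in candidates]
print(f"{len(candidates)} active neurons; worst reconstruction error {max(errors):.2e}")

With one sample per batch, every candidate equals the input up to numerical error; SRATTA's contribution is to exploit such recovered samples across FedAvg rounds to also re-attribute them to individual clients despite secure aggregation.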

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-marchand23a,
  title     = {{SRATTA}: Sample Re-{ATT}ribution Attack of Secure Aggregation in Federated Learning.},
  author    = {Marchand, Tanguy and Loeb, Regis and Marteau-Ferey, Ulysse and Ogier Du Terrail, Jean and Pignet, Arthur},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {23886--23914},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/marchand23a/marchand23a.pdf},
  url       = {https://proceedings.mlr.press/v202/marchand23a.html},
  abstract  = {We consider a federated learning (FL) setting where a machine learning model with a fully connected first layer is trained between different clients and a central server using FedAvg, and where the aggregation step can be performed with secure aggregation (SA). We present SRATTA, an attack relying only on aggregated models which, under realistic assumptions, (i) recovers data samples from the different clients, and (ii) groups data samples coming from the same client together. While sample recovery has already been explored in an FL setting, the ability to group samples per client, despite the use of SA, is novel. This poses a significant unforeseen security threat to FL and effectively breaks SA. We show that SRATTA is both theoretically grounded and can be used in practice on realistic models and datasets. We also propose counter-measures, and claim that clients should play an active role to guarantee their privacy during training.}
}
Endnote
%0 Conference Paper
%T SRATTA: Sample Re-ATTribution Attack of Secure Aggregation in Federated Learning.
%A Tanguy Marchand
%A Regis Loeb
%A Ulysse Marteau-Ferey
%A Jean Ogier Du Terrail
%A Arthur Pignet
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-marchand23a
%I PMLR
%P 23886--23914
%U https://proceedings.mlr.press/v202/marchand23a.html
%V 202
%X We consider a federated learning (FL) setting where a machine learning model with a fully connected first layer is trained between different clients and a central server using FedAvg, and where the aggregation step can be performed with secure aggregation (SA). We present SRATTA, an attack relying only on aggregated models which, under realistic assumptions, (i) recovers data samples from the different clients, and (ii) groups data samples coming from the same client together. While sample recovery has already been explored in an FL setting, the ability to group samples per client, despite the use of SA, is novel. This poses a significant unforeseen security threat to FL and effectively breaks SA. We show that SRATTA is both theoretically grounded and can be used in practice on realistic models and datasets. We also propose counter-measures, and claim that clients should play an active role to guarantee their privacy during training.
APA
Marchand, T., Loeb, R., Marteau-Ferey, U., Ogier Du Terrail, J. & Pignet, A. (2023). SRATTA: Sample Re-ATTribution Attack of Secure Aggregation in Federated Learning. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:23886-23914. Available from https://proceedings.mlr.press/v202/marchand23a.html.