SASH: Efficient secure aggregation based on SHPRG for federated learning

Zizhen Liu, Si Chen, Jing Ye, Junfeng Fan, Huawei Li, Xiaowei Li
Proceedings of the Thirty-Eighth Conference on Uncertainty in Artificial Intelligence, PMLR 180:1243-1252, 2022.

Abstract

To prevent private training data leakage in Federated Learning systems, we propose SASH, a novel secure aggregation scheme based on a seed-homomorphic pseudorandom generator (SHPRG). SASH leverages the homomorphic property of the SHPRG to simplify the masking and demasking scheme, so that the overhead for each client and for the server is linear in the model size and constant in the number of clients. We prove that SASH preserves training data privacy even against worst-case colluding adversaries, while remaining resilient to dropouts without extra overhead. We experimentally demonstrate that SASH improves efficiency by up to 20× over the baseline, especially in the realistic case where the number of clients and the model size are large and a certain percentage of clients drop out of the system.
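The masking idea the abstract describes can be illustrated with a toy seed-homomorphic PRG. The sketch below uses a plain linear map `G(s) = A·s mod q`, which satisfies `G(s1 + s2) = G(s1) + G(s2)` exactly but is not cryptographically secure; it stands in for the LWR-style SHPRG in the paper, and all names and parameters here are illustrative, not the paper's notation. How the server obtains the aggregate seed without learning any individual seed is the protocol's key-setup step and is elided.

```python
import numpy as np

q = 2**16          # working modulus
model_size = 8     # length of each client's quantised model update
seed_len = 4       # SHPRG seed length (much shorter than model_size)

rng = np.random.default_rng(0)
A = rng.integers(0, q, size=(model_size, seed_len))  # fixed public matrix

def G(seed):
    """Toy seed-homomorphic PRG: expands a short seed to a full-length mask."""
    return (A @ seed) % q

n_clients = 5
seeds = [rng.integers(0, q, size=seed_len) for _ in range(n_clients)]
updates = [rng.integers(0, q, size=model_size) for _ in range(n_clients)]

# Each client uploads its update hidden under one PRG expansion:
# cost linear in model_size, independent of n_clients.
masked = [(x + G(s)) % q for x, s in zip(updates, seeds)]

# The server removes all masks with a single PRG call on the seed sum,
# using G(s1) + ... + G(sn) = G(s1 + ... + sn) mod q.
seed_sum = np.sum(seeds, axis=0) % q
aggregate = (np.sum(masked, axis=0) - G(seed_sum)) % q

assert np.array_equal(aggregate, np.sum(updates, axis=0) % q)
```

Note that demasking costs one PRG evaluation regardless of how many clients respond, which is what makes the scheme's server overhead constant in the number of clients.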

Cite this Paper

BibTeX
@InProceedings{pmlr-v180-liu22c,
  title     = {{SASH}: Efficient secure aggregation based on {SHPRG} for federated learning},
  author    = {Liu, Zizhen and Chen, Si and Ye, Jing and Fan, Junfeng and Li, Huawei and Li, Xiaowei},
  booktitle = {Proceedings of the Thirty-Eighth Conference on Uncertainty in Artificial Intelligence},
  pages     = {1243--1252},
  year      = {2022},
  editor    = {Cussens, James and Zhang, Kun},
  volume    = {180},
  series    = {Proceedings of Machine Learning Research},
  month     = {01--05 Aug},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v180/liu22c/liu22c.pdf},
  url       = {https://proceedings.mlr.press/v180/liu22c.html},
  abstract  = {To prevent private training data leakage in Federated Learning systems, we propose SASH, a novel secure aggregation scheme based on a seed-homomorphic pseudorandom generator (SHPRG). SASH leverages the homomorphic property of the SHPRG to simplify the masking and demasking scheme, so that the overhead for each client and for the server is linear in the model size and constant in the number of clients. We prove that SASH preserves training data privacy even against worst-case colluding adversaries, while remaining resilient to dropouts without extra overhead. We experimentally demonstrate that SASH improves efficiency by up to 20× over the baseline, especially in the realistic case where the number of clients and the model size are large and a certain percentage of clients drop out of the system.}
}
Endnote
%0 Conference Paper
%T SASH: Efficient secure aggregation based on SHPRG for federated learning
%A Zizhen Liu
%A Si Chen
%A Jing Ye
%A Junfeng Fan
%A Huawei Li
%A Xiaowei Li
%B Proceedings of the Thirty-Eighth Conference on Uncertainty in Artificial Intelligence
%C Proceedings of Machine Learning Research
%D 2022
%E James Cussens
%E Kun Zhang
%F pmlr-v180-liu22c
%I PMLR
%P 1243--1252
%U https://proceedings.mlr.press/v180/liu22c.html
%V 180
%X To prevent private training data leakage in Federated Learning systems, we propose SASH, a novel secure aggregation scheme based on a seed-homomorphic pseudorandom generator (SHPRG). SASH leverages the homomorphic property of the SHPRG to simplify the masking and demasking scheme, so that the overhead for each client and for the server is linear in the model size and constant in the number of clients. We prove that SASH preserves training data privacy even against worst-case colluding adversaries, while remaining resilient to dropouts without extra overhead. We experimentally demonstrate that SASH improves efficiency by up to 20× over the baseline, especially in the realistic case where the number of clients and the model size are large and a certain percentage of clients drop out of the system.
APA
Liu, Z., Chen, S., Ye, J., Fan, J., Li, H., & Li, X. (2022). SASH: Efficient secure aggregation based on SHPRG for federated learning. Proceedings of the Thirty-Eighth Conference on Uncertainty in Artificial Intelligence, in Proceedings of Machine Learning Research 180:1243-1252. Available from https://proceedings.mlr.press/v180/liu22c.html.