GuardHFL: Privacy Guardian for Heterogeneous Federated Learning

Hanxiao Chen, Meng Hao, Hongwei Li, Kangjie Chen, Guowen Xu, Tianwei Zhang, Xilin Zhang
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:4566-4584, 2023.

Abstract

Heterogeneous federated learning (HFL) enables clients with different computation and communication capabilities to collaboratively train their own customized models via a query-response paradigm on auxiliary datasets. However, such a paradigm raises serious privacy concerns due to the leakage of highly sensitive query samples and response predictions. We put forth GuardHFL, the first-of-its-kind efficient and privacy-preserving HFL framework. GuardHFL is equipped with a novel HFL-friendly secure querying scheme built on lightweight secret sharing and symmetric-key techniques. The core of GuardHFL is two customized multiplication and comparison protocols, which substantially boost the execution efficiency. Extensive evaluations demonstrate that GuardHFL significantly outperforms the alternative instantiations based on existing state-of-the-art techniques in both runtime and communication cost.
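The abstract names lightweight secret sharing and a customized multiplication protocol as building blocks of the secure querying scheme. As background context only (this is a generic sketch of standard additive secret sharing with Beaver-triple multiplication, not the paper's customized protocols), a minimal two-party version might look like:

```python
import secrets

MOD = 2**64  # ring Z_{2^64}, a common choice for arithmetic secret sharing


def share(x):
    """Split x into two additive shares with x = (s0 + s1) mod 2^64."""
    s0 = secrets.randbelow(MOD)
    s1 = (x - s0) % MOD
    return s0, s1


def reconstruct(s0, s1):
    """Recombine the two shares to recover the secret."""
    return (s0 + s1) % MOD


def beaver_multiply(x_shares, y_shares, triple_shares):
    """Multiply secret-shared x and y using a precomputed Beaver triple
    (a, b, c) with c = a*b, itself held in shares. The parties open
    e = x - a and f = y - b, which leak nothing about x and y since
    a and b are uniformly random masks."""
    (x0, x1), (y0, y1) = x_shares, y_shares
    (a0, a1), (b0, b1), (c0, c1) = triple_shares
    e = (x0 - a0 + x1 - a1) % MOD  # opened value x - a
    f = (y0 - b0 + y1 - b1) % MOD  # opened value y - b
    # x*y = (a+e)(b+f) = c + e*b + f*a + e*f, computed share-wise;
    # the public e*f term is added by exactly one party
    z0 = (e * f + e * b0 + f * a0 + c0) % MOD
    z1 = (e * b1 + f * a1 + c1) % MOD
    return z0, z1
```

For example, sharing x = 7 and y = 9 with a triple built from a = 3, b = 5, c = 15 and running `beaver_multiply` yields shares that reconstruct to 63.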

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-chen23j,
  title     = {{G}uard{HFL}: Privacy Guardian for Heterogeneous Federated Learning},
  author    = {Chen, Hanxiao and Hao, Meng and Li, Hongwei and Chen, Kangjie and Xu, Guowen and Zhang, Tianwei and Zhang, Xilin},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {4566--4584},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/chen23j/chen23j.pdf},
  url       = {https://proceedings.mlr.press/v202/chen23j.html},
  abstract  = {Heterogeneous federated learning (HFL) enables clients with different computation and communication capabilities to collaboratively train their own customized models via a query-response paradigm on auxiliary datasets. However, such a paradigm raises serious privacy concerns due to the leakage of highly sensitive query samples and response predictions. We put forth GuardHFL, the first-of-its-kind efficient and privacy-preserving HFL framework. GuardHFL is equipped with a novel HFL-friendly secure querying scheme built on lightweight secret sharing and symmetric-key techniques. The core of GuardHFL is two customized multiplication and comparison protocols, which substantially boost the execution efficiency. Extensive evaluations demonstrate that GuardHFL significantly outperforms the alternative instantiations based on existing state-of-the-art techniques in both runtime and communication cost.}
}
Endnote
%0 Conference Paper
%T GuardHFL: Privacy Guardian for Heterogeneous Federated Learning
%A Hanxiao Chen
%A Meng Hao
%A Hongwei Li
%A Kangjie Chen
%A Guowen Xu
%A Tianwei Zhang
%A Xilin Zhang
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-chen23j
%I PMLR
%P 4566--4584
%U https://proceedings.mlr.press/v202/chen23j.html
%V 202
%X Heterogeneous federated learning (HFL) enables clients with different computation and communication capabilities to collaboratively train their own customized models via a query-response paradigm on auxiliary datasets. However, such a paradigm raises serious privacy concerns due to the leakage of highly sensitive query samples and response predictions. We put forth GuardHFL, the first-of-its-kind efficient and privacy-preserving HFL framework. GuardHFL is equipped with a novel HFL-friendly secure querying scheme built on lightweight secret sharing and symmetric-key techniques. The core of GuardHFL is two customized multiplication and comparison protocols, which substantially boost the execution efficiency. Extensive evaluations demonstrate that GuardHFL significantly outperforms the alternative instantiations based on existing state-of-the-art techniques in both runtime and communication cost.
APA
Chen, H., Hao, M., Li, H., Chen, K., Xu, G., Zhang, T. &amp; Zhang, X. (2023). GuardHFL: Privacy Guardian for Heterogeneous Federated Learning. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:4566-4584. Available from https://proceedings.mlr.press/v202/chen23j.html.