Privacy-Aware Compression for Federated Learning Through Numerical Mechanism Design

Chuan Guo, Kamalika Chaudhuri, Pierre Stock, Michael Rabbat
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:11888-11904, 2023.

Abstract

In private federated learning (FL), a server aggregates differentially private updates from a large number of clients in order to train a machine learning model. The main challenge in this setting is balancing privacy with both classification accuracy of the learnt model as well as the number of bits communicated between the clients and server. Prior work has achieved a good trade-off by designing a privacy-aware compression mechanism, called the minimum variance unbiased (MVU) mechanism, that numerically solves an optimization problem to determine the parameters of the mechanism. This paper builds upon it by introducing a new interpolation procedure in the numerical design process that allows for a far more efficient privacy analysis. The result is the new Interpolated MVU mechanism that is more scalable, has a better privacy-utility trade-off, and provides SOTA results on communication-efficient private FL on a variety of datasets.

Cite this Paper
BibTeX
@InProceedings{pmlr-v202-guo23a,
  title     = {Privacy-Aware Compression for Federated Learning Through Numerical Mechanism Design},
  author    = {Guo, Chuan and Chaudhuri, Kamalika and Stock, Pierre and Rabbat, Michael},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {11888--11904},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/guo23a/guo23a.pdf},
  url       = {https://proceedings.mlr.press/v202/guo23a.html},
  abstract  = {In private federated learning (FL), a server aggregates differentially private updates from a large number of clients in order to train a machine learning model. The main challenge in this setting is balancing privacy with both classification accuracy of the learnt model as well as the number of bits communicated between the clients and server. Prior work has achieved a good trade-off by designing a privacy-aware compression mechanism, called the minimum variance unbiased (MVU) mechanism, that numerically solves an optimization problem to determine the parameters of the mechanism. This paper builds upon it by introducing a new interpolation procedure in the numerical design process that allows for a far more efficient privacy analysis. The result is the new Interpolated MVU mechanism that is more scalable, has a better privacy-utility trade-off, and provides SOTA results on communication-efficient private FL on a variety of datasets.}
}
Endnote
%0 Conference Paper
%T Privacy-Aware Compression for Federated Learning Through Numerical Mechanism Design
%A Chuan Guo
%A Kamalika Chaudhuri
%A Pierre Stock
%A Michael Rabbat
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-guo23a
%I PMLR
%P 11888--11904
%U https://proceedings.mlr.press/v202/guo23a.html
%V 202
%X In private federated learning (FL), a server aggregates differentially private updates from a large number of clients in order to train a machine learning model. The main challenge in this setting is balancing privacy with both classification accuracy of the learnt model as well as the number of bits communicated between the clients and server. Prior work has achieved a good trade-off by designing a privacy-aware compression mechanism, called the minimum variance unbiased (MVU) mechanism, that numerically solves an optimization problem to determine the parameters of the mechanism. This paper builds upon it by introducing a new interpolation procedure in the numerical design process that allows for a far more efficient privacy analysis. The result is the new Interpolated MVU mechanism that is more scalable, has a better privacy-utility trade-off, and provides SOTA results on communication-efficient private FL on a variety of datasets.
APA
Guo, C., Chaudhuri, K., Stock, P. & Rabbat, M. (2023). Privacy-Aware Compression for Federated Learning Through Numerical Mechanism Design. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:11888-11904. Available from https://proceedings.mlr.press/v202/guo23a.html.

Related Material