Compression with Exact Error Distribution for Federated Learning

Mahmoud Hegazy, Rémi Leluc, Cheuk Ting Li, Aymeric Dieuleveut
Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, PMLR 238:613-621, 2024.

Abstract

Compression schemes have been extensively used in Federated Learning (FL) to reduce the communication cost of distributed learning. While most approaches rely on a bounded variance assumption of the noise produced by the compressor, this paper investigates the use of compression and aggregation schemes that produce a specific error distribution, e.g., Gaussian or Laplace, on the aggregated data. We present and analyze different aggregation schemes based on layered quantizers achieving exact error distribution. We provide different methods to leverage the proposed compression schemes to obtain compression-for-free in differential privacy applications. Our general compression methods can recover and improve standard FL schemes with Gaussian perturbations such as Langevin dynamics and randomized smoothing.
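To give a flavor of how an exact error distribution can arise from quantization, the sketch below is a minimal, single-client illustration and not the paper's exact construction. It uses two facts: subtractive dithered quantization has an error that is exactly uniform over one quantization cell, independent of the input, and randomizing the step size over "layers" of a unimodal target density turns that uniform error into an error with exactly that target distribution (here Gaussian). All function and variable names are illustrative.

import numpy as np
from scipy.stats import norm

def layered_gaussian_quantizer(x, sigma, rng):
    # Draw a "layer" of the N(0, sigma^2) density: sample z from the target,
    # then a height t uniformly below the density at z. The induced layer is
    # selected with probability proportional to the width of the level set
    # {u : f(u) >= t}, which is exactly the mixture weight needed so that
    # the layered error has the Gaussian marginal.
    z = rng.normal(0.0, sigma)
    t = rng.uniform(0.0, norm.pdf(z, scale=sigma))
    # Width of that level set for the N(0, sigma^2) density.
    width = 2.0 * sigma * np.sqrt(-2.0 * np.log(t * sigma * np.sqrt(2.0 * np.pi)))
    # Subtractive dithered quantization with step equal to the layer width:
    # given the layer, the reconstruction error is Uniform(-width/2, width/2);
    # marginalized over layers it is exactly N(0, sigma^2), independent of x.
    dither = rng.uniform(-0.5, 0.5)       # shared randomness (e.g. common seed)
    k = np.round(x / width + dither)      # integer index sent over the channel
    x_hat = (k - dither) * width          # reconstruction on the receiver side
    return int(k), x_hat

# Quick sanity check: the reconstruction error should look N(0, 1).
rng = np.random.default_rng(0)
errs = np.array([layered_gaussian_quantizer(3.7, 1.0, rng)[1] - 3.7
                 for _ in range(10000)])
print(errs.mean(), errs.std())            # close to 0 and to sigma = 1

In a federated setting, only the integer index would be communicated; the aggregation schemes studied in the paper additionally coordinate such quantizers across clients so that the error of the aggregated estimate at the server follows the desired distribution.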

Cite this Paper


BibTeX
@InProceedings{pmlr-v238-hegazy24a,
  title     = {Compression with Exact Error Distribution for Federated Learning},
  author    = {Hegazy, Mahmoud and Leluc, R\'{e}mi and Ting Li, Cheuk and Dieuleveut, Aymeric},
  booktitle = {Proceedings of The 27th International Conference on Artificial Intelligence and Statistics},
  pages     = {613--621},
  year      = {2024},
  editor    = {Dasgupta, Sanjoy and Mandt, Stephan and Li, Yingzhen},
  volume    = {238},
  series    = {Proceedings of Machine Learning Research},
  month     = {02--04 May},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v238/hegazy24a/hegazy24a.pdf},
  url       = {https://proceedings.mlr.press/v238/hegazy24a.html},
  abstract  = {Compression schemes have been extensively used in Federated Learning (FL) to reduce the communication cost of distributed learning. While most approaches rely on a bounded variance assumption of the noise produced by the compressor, this paper investigates the use of compression and aggregation schemes that produce a specific error distribution, e.g., Gaussian or Laplace, on the aggregated data. We present and analyze different aggregation schemes based on layered quantizers achieving exact error distribution. We provide different methods to leverage the proposed compression schemes to obtain compression-for-free in differential privacy applications. Our general compression methods can recover and improve standard FL schemes with Gaussian perturbations such as Langevin dynamics and randomized smoothing.}
}
Endnote
%0 Conference Paper
%T Compression with Exact Error Distribution for Federated Learning
%A Mahmoud Hegazy
%A Rémi Leluc
%A Cheuk Ting Li
%A Aymeric Dieuleveut
%B Proceedings of The 27th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2024
%E Sanjoy Dasgupta
%E Stephan Mandt
%E Yingzhen Li
%F pmlr-v238-hegazy24a
%I PMLR
%P 613--621
%U https://proceedings.mlr.press/v238/hegazy24a.html
%V 238
%X Compression schemes have been extensively used in Federated Learning (FL) to reduce the communication cost of distributed learning. While most approaches rely on a bounded variance assumption of the noise produced by the compressor, this paper investigates the use of compression and aggregation schemes that produce a specific error distribution, e.g., Gaussian or Laplace, on the aggregated data. We present and analyze different aggregation schemes based on layered quantizers achieving exact error distribution. We provide different methods to leverage the proposed compression schemes to obtain compression-for-free in differential privacy applications. Our general compression methods can recover and improve standard FL schemes with Gaussian perturbations such as Langevin dynamics and randomized smoothing.
APA
Hegazy, M., Leluc, R., Ting Li, C. & Dieuleveut, A. (2024). Compression with Exact Error Distribution for Federated Learning. Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 238:613-621. Available from https://proceedings.mlr.press/v238/hegazy24a.html.