Infinitely Divisible Noise in the Low Privacy Regime

Rasmus Pagh, Nina Mesing Stausholm
Proceedings of The 33rd International Conference on Algorithmic Learning Theory, PMLR 167:881-909, 2022.

Abstract

Federated learning, in which training data is distributed among users and never shared, has emerged as a popular approach to privacy-preserving machine learning. Cryptographic techniques such as secure aggregation are used to aggregate contributions, like a model update, from all users. A robust technique for making such aggregates differentially private is to exploit \emph{infinite divisibility} of the Laplace distribution, namely, that a Laplace distribution can be expressed as a sum of i.i.d. noise shares from a Gamma distribution, one share added by each user. However, Laplace noise is known to have suboptimal error in the low privacy regime for $\varepsilon$-differential privacy, where $\varepsilon > 1$ is a large constant. In this paper we present the first infinitely divisible noise distribution for real-valued data that achieves $\varepsilon$-differential privacy and has expected error that decreases exponentially with $\varepsilon$.
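
To make the decomposition mentioned above concrete, here is a minimal NumPy sketch (an illustration of the standard Gamma decomposition of the Laplace distribution, not code from the paper; the function name and parameters are chosen for the example): each of n users draws the difference of two independent Gamma(1/n, b) noise shares, and the sum of all n shares is distributed as Laplace(0, b), exactly the noise the central Laplace mechanism would add with scale b.

import numpy as np

rng = np.random.default_rng(0)

def laplace_noise_shares(n_users, scale, size=1):
    # Each user's share is the difference of two independent
    # Gamma(shape=1/n_users, scale) draws. Summing the shares of all
    # n_users users yields a Laplace(0, scale) sample: the Gamma(1/n)
    # terms add up to Gamma(1) = Exponential(scale), and the difference
    # of two independent Exponential(scale) variables is Laplace(0, scale).
    shape = 1.0 / n_users
    pos = rng.gamma(shape, scale, size=(n_users, size))
    neg = rng.gamma(shape, scale, size=(n_users, size))
    return pos - neg

n_users, scale = 100, 1.0
shares = laplace_noise_shares(n_users, scale, size=200_000)
aggregate = shares.sum(axis=0)

# Sanity checks against Laplace(0, b): Var = 2*b**2 and E|X| = b.
print("empirical variance :", aggregate.var())           # ~2.0
print("empirical mean |X| :", np.abs(aggregate).mean())  # ~1.0

With b set to sensitivity/epsilon, the aggregate is epsilon-differentially private with expected absolute error b, which only decreases as 1/epsilon; the paper's contribution is an analogous infinitely divisible noise distribution whose expected error instead decreases exponentially with epsilon.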

Cite this Paper


BibTeX
@InProceedings{pmlr-v167-pagh22a,
  title     = {Infinitely Divisible Noise in the Low Privacy Regime},
  author    = {Pagh, Rasmus and Stausholm, Nina Mesing},
  booktitle = {Proceedings of The 33rd International Conference on Algorithmic Learning Theory},
  pages     = {881--909},
  year      = {2022},
  editor    = {Dasgupta, Sanjoy and Haghtalab, Nika},
  volume    = {167},
  series    = {Proceedings of Machine Learning Research},
  month     = {29 Mar--01 Apr},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v167/pagh22a/pagh22a.pdf},
  url       = {https://proceedings.mlr.press/v167/pagh22a.html}
}
APA
Pagh, R. & Stausholm, N. M. (2022). Infinitely Divisible Noise in the Low Privacy Regime. Proceedings of The 33rd International Conference on Algorithmic Learning Theory, in Proceedings of Machine Learning Research 167:881-909. Available from https://proceedings.mlr.press/v167/pagh22a.html.
