Bounding, Concentrating, and Truncating: Unifying Privacy Loss Composition for Data Analytics

Mark Cesar, Ryan Rogers
Proceedings of the 32nd International Conference on Algorithmic Learning Theory, PMLR 132:421-457, 2021.

Abstract

We unify existing privacy loss composition bounds for special classes of differentially private (DP) algorithms along with general DP composition bounds. In particular, we provide strong privacy loss bounds when an analyst may select pure DP, bounded range (e.g. exponential mechanisms), or concentrated DP mechanisms in any adaptively selected order. We also provide optimal privacy loss bounds that apply when an analyst can select pure DP and bounded range mechanisms in a batch, i.e. non-adaptively. Further, when an analyst selects mechanisms within each class adaptively, we show a difference in privacy loss between different, predetermined orderings of pure DP and bounded range mechanisms. Lastly, we compare the composition bounds of Laplace and Gaussian mechanisms and provide new private mechanisms for top-$k$ using truncated Gaussian noise.

Cite this Paper


BibTeX
@InProceedings{pmlr-v132-cesar21a,
  title = {Bounding, Concentrating, and Truncating: Unifying Privacy Loss Composition for Data Analytics},
  author = {Cesar, Mark and Rogers, Ryan},
  booktitle = {Proceedings of the 32nd International Conference on Algorithmic Learning Theory},
  pages = {421--457},
  year = {2021},
  editor = {Feldman, Vitaly and Ligett, Katrina and Sabato, Sivan},
  volume = {132},
  series = {Proceedings of Machine Learning Research},
  month = {16--19 Mar},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v132/cesar21a/cesar21a.pdf},
  url = {https://proceedings.mlr.press/v132/cesar21a.html},
  abstract = {We unify existing privacy loss composition bounds for special classes of differentially private (DP) algorithms along with general DP composition bounds. In particular, we provide strong privacy loss bounds when an analyst may select pure DP, bounded range (e.g. exponential mechanisms), or concentrated DP mechanisms in any adaptively selected order. We also provide optimal privacy loss bounds that apply when an analyst can select pure DP and bounded range mechanisms in a batch, i.e. non-adaptively. Further, when an analyst selects mechanisms within each class adaptively, we show a difference in privacy loss between different, predetermined orderings of pure DP and bounded range mechanisms. Lastly, we compare the composition bounds of Laplace and Gaussian mechanisms and provide new private mechanisms for top-$k$ using truncated Gaussian noise.}
}
Endnote
%0 Conference Paper
%T Bounding, Concentrating, and Truncating: Unifying Privacy Loss Composition for Data Analytics
%A Mark Cesar
%A Ryan Rogers
%B Proceedings of the 32nd International Conference on Algorithmic Learning Theory
%C Proceedings of Machine Learning Research
%D 2021
%E Vitaly Feldman
%E Katrina Ligett
%E Sivan Sabato
%F pmlr-v132-cesar21a
%I PMLR
%P 421--457
%U https://proceedings.mlr.press/v132/cesar21a.html
%V 132
%X We unify existing privacy loss composition bounds for special classes of differentially private (DP) algorithms along with general DP composition bounds. In particular, we provide strong privacy loss bounds when an analyst may select pure DP, bounded range (e.g. exponential mechanisms), or concentrated DP mechanisms in any adaptively selected order. We also provide optimal privacy loss bounds that apply when an analyst can select pure DP and bounded range mechanisms in a batch, i.e. non-adaptively. Further, when an analyst selects mechanisms within each class adaptively, we show a difference in privacy loss between different, predetermined orderings of pure DP and bounded range mechanisms. Lastly, we compare the composition bounds of Laplace and Gaussian mechanisms and provide new private mechanisms for top-$k$ using truncated Gaussian noise.
APA
Cesar, M., & Rogers, R. (2021). Bounding, Concentrating, and Truncating: Unifying Privacy Loss Composition for Data Analytics. Proceedings of the 32nd International Conference on Algorithmic Learning Theory, in Proceedings of Machine Learning Research 132:421-457. Available from https://proceedings.mlr.press/v132/cesar21a.html.