Bounding, Concentrating, and Truncating: Unifying Privacy Loss Composition for Data Analytics
Proceedings of the 32nd International Conference on Algorithmic Learning Theory, PMLR 132:421-457, 2021.
Abstract
We unify existing privacy loss composition bounds for special classes of differentially private (DP) algorithms along with general DP composition bounds. In particular, we provide strong privacy loss bounds when an analyst may select pure DP, bounded range (e.g., exponential mechanisms), or concentrated DP mechanisms in any adaptively selected order. We also provide optimal privacy loss bounds that apply when an analyst selects pure DP and bounded range mechanisms in a batch, i.e., non-adaptively. Further, when an analyst selects mechanisms within each class adaptively, we show that the privacy loss differs between distinct, predetermined orderings of pure DP and bounded range mechanisms. Lastly, we compare the composition bounds of Laplace and Gaussian mechanisms and provide new private mechanisms for top-$k$ selection using truncated Gaussian noise.
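To make the top-$k$ idea concrete, the following is a minimal sketch (not the authors' exact mechanism or calibration): add truncated Gaussian noise to each count and report the indices of the $k$ largest noisy counts. The noise scale `sigma` and truncation width `B` are placeholder parameters; the paper derives the calibration needed for a formal privacy guarantee.

```python
# Hedged sketch of noisy top-k selection with truncated Gaussian noise.
# Parameters sigma and B are illustrative placeholders, not a DP calibration.
import numpy as np
from scipy.stats import truncnorm

def noisy_top_k(counts, k, sigma=1.0, B=3.0, rng=None):
    """Return indices of the k largest counts after adding truncated Gaussian noise.

    counts : non-negative query answers (e.g., histogram counts)
    sigma  : noise standard deviation (placeholder; must be calibrated for DP)
    B      : truncation width in units of sigma, so noise lies in [-B*sigma, B*sigma]
    """
    counts = np.asarray(counts, dtype=float)
    # truncnorm parameterizes its support in standard-deviation units
    noise = truncnorm.rvs(-B, B, loc=0.0, scale=sigma,
                          size=counts.shape, random_state=rng)
    noisy = counts + noise
    # indices of the k largest noisy counts, in descending noisy order
    return np.argsort(noisy)[::-1][:k].tolist()

# Example: select the 3 largest of 6 counts
print(noisy_top_k([120, 95, 93, 40, 12, 7], k=3))
```

Bounding the noise support is what allows the bounded range composition analysis to apply, in contrast to the unbounded Laplace or Gaussian noise considered in the comparison above.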