Computing Tight Differential Privacy Guarantees Using FFT

Antti Koskela, Joonas Jälkö, Antti Honkela
Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, PMLR 108:2560-2569, 2020.

Abstract

Differentially private (DP) machine learning has recently become popular. The privacy loss of DP algorithms is commonly reported using (ε,δ)-DP. In this paper, we propose a numerical accountant for evaluating the privacy loss for algorithms with continuous one dimensional output. This accountant can be applied to the subsampled multidimensional Gaussian mechanism which underlies the popular DP stochastic gradient descent. The proposed method is based on a numerical approximation of an integral formula which gives the exact (ε,δ)-values. The approximation is carried out by discretising the integral and by evaluating discrete convolutions using the fast Fourier transform algorithm. We give theoretical error bounds, which show the convergence of the approximation and guarantee its accuracy to an arbitrary degree, as well as numerical error estimates. Experimental comparisons with state-of-the-art techniques demonstrate significant improvements in bound tightness and/or computation time.

Cite this Paper


BibTeX
@InProceedings{pmlr-v108-koskela20b,
  title     = {Computing Tight Differential Privacy Guarantees Using FFT},
  author    = {Koskela, Antti and J\"alk\"o, Joonas and Honkela, Antti},
  booktitle = {Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics},
  pages     = {2560--2569},
  year      = {2020},
  editor    = {Chiappa, Silvia and Calandra, Roberto},
  volume    = {108},
  series    = {Proceedings of Machine Learning Research},
  month     = {26--28 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v108/koskela20b/koskela20b.pdf},
  url       = {https://proceedings.mlr.press/v108/koskela20b.html},
  abstract  = {Differentially private (DP) machine learning has recently become popular. The privacy loss of DP algorithms is commonly reported using $(\epsilon,\delta)$-DP. In this paper, we propose a numerical accountant for evaluating the privacy loss for algorithms with continuous one dimensional output. This accountant can be applied to the subsampled multidimensional Gaussian mechanism which underlies the popular DP stochastic gradient descent. The proposed method is based on a numerical approximation of an integral formula which gives the exact $(\epsilon,\delta)$-values. The approximation is carried out by discretising the integral and by evaluating discrete convolutions using the fast Fourier transform algorithm. We give theoretical error bounds, which show the convergence of the approximation and guarantee its accuracy to an arbitrary degree, as well as numerical error estimates. Experimental comparisons with state-of-the-art techniques demonstrate significant improvements in bound tightness and/or computation time.}
}
Endnote
%0 Conference Paper
%T Computing Tight Differential Privacy Guarantees Using FFT
%A Antti Koskela
%A Joonas Jälkö
%A Antti Honkela
%B Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2020
%E Silvia Chiappa
%E Roberto Calandra
%F pmlr-v108-koskela20b
%I PMLR
%P 2560--2569
%U https://proceedings.mlr.press/v108/koskela20b.html
%V 108
%X Differentially private (DP) machine learning has recently become popular. The privacy loss of DP algorithms is commonly reported using (ε,δ)-DP. In this paper, we propose a numerical accountant for evaluating the privacy loss for algorithms with continuous one dimensional output. This accountant can be applied to the subsampled multidimensional Gaussian mechanism which underlies the popular DP stochastic gradient descent. The proposed method is based on a numerical approximation of an integral formula which gives the exact (ε,δ)-values. The approximation is carried out by discretising the integral and by evaluating discrete convolutions using the fast Fourier transform algorithm. We give theoretical error bounds, which show the convergence of the approximation and guarantee its accuracy to an arbitrary degree, as well as numerical error estimates. Experimental comparisons with state-of-the-art techniques demonstrate significant improvements in bound tightness and/or computation time.
APA
Koskela, A., Jälkö, J. & Honkela, A. (2020). Computing Tight Differential Privacy Guarantees Using FFT. Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 108:2560-2569. Available from https://proceedings.mlr.press/v108/koskela20b.html.