Sharp Composition Bounds for Gaussian Differential Privacy via Edgeworth Expansion

Qinqing Zheng, Jinshuo Dong, Qi Long, Weijie Su
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:11420-11435, 2020.

Abstract

Datasets containing sensitive information are often analyzed sequentially by many algorithms, so a fundamental question in differential privacy concerns how the overall privacy bound degrades under composition. To address this question, we introduce a family of analytical and sharp privacy bounds under composition, using the Edgeworth expansion in the framework of the recently proposed $f$-differential privacy. In short, whereas the existing composition theorem relies on the central limit theorem, our new privacy bounds under composition gain improved tightness by leveraging the refined approximation accuracy of the Edgeworth expansion. Our approach is easy to implement and computationally efficient for any number of compositions. The superiority of these new bounds is confirmed by an asymptotic error analysis and by an application to quantifying the overall privacy guarantees of the noisy stochastic gradient descent used in training private deep neural networks.
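
For intuition, here is a minimal Python sketch (assuming NumPy and SciPy are available) of the generic refinement the abstract alludes to: a first-order Edgeworth correction to the CLT approximation of the CDF of a standardized i.i.d. sum. This is not the paper's algorithm, which applies the expansion to the log-likelihood ratios defining $f$-DP trade-off functions; the exponential-sum example below is purely illustrative.

import numpy as np
from scipy.stats import gamma, norm

def edgeworth_cdf(x, n, skew):
    # First-order Edgeworth expansion for the CDF of the standardized sum
    # of n i.i.d. variables with the given skewness:
    #   F_n(x) ~= Phi(x) - phi(x) * skew / (6 sqrt(n)) * (x^2 - 1)
    return norm.cdf(x) - norm.pdf(x) * skew / (6.0 * np.sqrt(n)) * (x**2 - 1.0)

# Illustrative check: a sum of n Exponential(1) variables (mean 1, sd 1,
# skewness 2), whose exact distribution is Gamma(n, 1).
n, x = 20, 0.5
exact = gamma.cdf(n + x * np.sqrt(n), a=n)   # exact CDF of the standardized sum
clt = norm.cdf(x)                            # plain CLT (normal) approximation
edge = edgeworth_cdf(x, n, skew=2.0)         # Edgeworth-refined approximation
print(f"exact {exact:.4f}  CLT {clt:.4f}  Edgeworth {edge:.4f}")

The corrected CDF tracks the exact value noticeably better than the plain normal approximation at moderate n; replacing the normal CDF by such an expansion is, in spirit, how the paper tightens the CLT-based composition bound.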

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-zheng20a,
  title     = {Sharp Composition Bounds for {G}aussian Differential Privacy via Edgeworth Expansion},
  author    = {Zheng, Qinqing and Dong, Jinshuo and Long, Qi and Su, Weijie},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {11420--11435},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/zheng20a/zheng20a.pdf},
  url       = {https://proceedings.mlr.press/v119/zheng20a.html}
}
Endnote
%0 Conference Paper
%T Sharp Composition Bounds for Gaussian Differential Privacy via Edgeworth Expansion
%A Qinqing Zheng
%A Jinshuo Dong
%A Qi Long
%A Weijie Su
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-zheng20a
%I PMLR
%P 11420--11435
%U https://proceedings.mlr.press/v119/zheng20a.html
%V 119
APA
Zheng, Q., Dong, J., Long, Q., & Su, W. (2020). Sharp Composition Bounds for Gaussian Differential Privacy via Edgeworth Expansion. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:11420-11435. Available from https://proceedings.mlr.press/v119/zheng20a.html.