Mitigating statistical bias within differentially private synthetic data

Sahra Ghalebikesabi, Harry Wilde, Jack Jewson, Arnaud Doucet, Sebastian Vollmer, Chris Holmes
Proceedings of the Thirty-Eighth Conference on Uncertainty in Artificial Intelligence, PMLR 180:696-705, 2022.

Abstract

Increasing interest in privacy-preserving machine learning has led to new and evolved approaches for generating private synthetic data from undisclosed real data. However, mechanisms of privacy preservation can significantly reduce the utility of synthetic data, which in turn impacts downstream tasks such as learning predictive models or inference. We propose several re-weighting strategies using privatised likelihood ratios that not only mitigate statistical bias of downstream estimators but also have general applicability to differentially private generative models. Through large-scale empirical evaluation, we show that private importance weighting provides simple and effective privacy-compliant augmentation for general applications of synthetic data.
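The abstract describes correcting downstream estimators by re-weighting synthetic samples with likelihood ratios. The following is a minimal toy sketch of that idea only, assuming exact (non-private) ratios between two known Gaussians; the paper's contribution is estimating these ratios under differential privacy, which is not shown here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a "real" distribution and a biased "synthetic" one.
# (Illustrative only -- the paper's generators are DP mechanisms, not Gaussians.)
real_mean, synth_mean, sigma = 0.0, 0.5, 1.0
synthetic = rng.normal(synth_mean, sigma, size=10_000)

# Importance weights: likelihood ratio p_real(x) / p_synth(x).
# In the paper these ratios are privatised; here they are computed exactly.
log_w = (-(synthetic - real_mean) ** 2 + (synthetic - synth_mean) ** 2) / (2 * sigma**2)
w = np.exp(log_w)
w /= w.sum()  # self-normalised importance weights

naive = synthetic.mean()          # biased towards synth_mean
weighted = np.sum(w * synthetic)  # re-weighted estimate, closer to real_mean

print(f"naive: {naive:.3f}, weighted: {weighted:.3f}")
```

The weighted estimate recovers the real-data mean far more closely than the naive synthetic average, illustrating the bias mitigation the abstract refers to.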

Cite this Paper


BibTeX
@InProceedings{pmlr-v180-ghalebikesabi22a,
  title     = {Mitigating statistical bias within differentially private synthetic data},
  author    = {Ghalebikesabi, Sahra and Wilde, Harry and Jewson, Jack and Doucet, Arnaud and Vollmer, Sebastian and Holmes, Chris},
  booktitle = {Proceedings of the Thirty-Eighth Conference on Uncertainty in Artificial Intelligence},
  pages     = {696--705},
  year      = {2022},
  editor    = {Cussens, James and Zhang, Kun},
  volume    = {180},
  series    = {Proceedings of Machine Learning Research},
  month     = {01--05 Aug},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v180/ghalebikesabi22a/ghalebikesabi22a.pdf},
  url       = {https://proceedings.mlr.press/v180/ghalebikesabi22a.html},
  abstract  = {Increasing interest in privacy-preserving machine learning has led to new and evolved approaches for generating private synthetic data from undisclosed real data. However, mechanisms of privacy preservation can significantly reduce the utility of synthetic data, which in turn impacts downstream tasks such as learning predictive models or inference. We propose several re-weighting strategies using privatised likelihood ratios that not only mitigate statistical bias of downstream estimators but also have general applicability to differentially private generative models. Through large-scale empirical evaluation, we show that private importance weighting provides simple and effective privacy-compliant augmentation for general applications of synthetic data.}
}
Endnote
%0 Conference Paper
%T Mitigating statistical bias within differentially private synthetic data
%A Sahra Ghalebikesabi
%A Harry Wilde
%A Jack Jewson
%A Arnaud Doucet
%A Sebastian Vollmer
%A Chris Holmes
%B Proceedings of the Thirty-Eighth Conference on Uncertainty in Artificial Intelligence
%C Proceedings of Machine Learning Research
%D 2022
%E James Cussens
%E Kun Zhang
%F pmlr-v180-ghalebikesabi22a
%I PMLR
%P 696--705
%U https://proceedings.mlr.press/v180/ghalebikesabi22a.html
%V 180
%X Increasing interest in privacy-preserving machine learning has led to new and evolved approaches for generating private synthetic data from undisclosed real data. However, mechanisms of privacy preservation can significantly reduce the utility of synthetic data, which in turn impacts downstream tasks such as learning predictive models or inference. We propose several re-weighting strategies using privatised likelihood ratios that not only mitigate statistical bias of downstream estimators but also have general applicability to differentially private generative models. Through large-scale empirical evaluation, we show that private importance weighting provides simple and effective privacy-compliant augmentation for general applications of synthetic data.
APA
Ghalebikesabi, S., Wilde, H., Jewson, J., Doucet, A., Vollmer, S. & Holmes, C. (2022). Mitigating statistical bias within differentially private synthetic data. Proceedings of the Thirty-Eighth Conference on Uncertainty in Artificial Intelligence, in Proceedings of Machine Learning Research 180:696-705. Available from https://proceedings.mlr.press/v180/ghalebikesabi22a.html.

Related Material