Membership Inference Attacks against Synthetic Data through Overfitting Detection

Boris van Breugel, Hao Sun, Zhaozhi Qian, Mihaela van der Schaar
Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, PMLR 206:3493-3514, 2023.

Abstract

Data is the foundation of most science. Unfortunately, sharing data can be obstructed by the risk of violating data privacy, impeding research in fields like healthcare. Synthetic data is a potential solution. It aims to generate data that has the same distribution as the original data, but that does not disclose information about individuals. Membership Inference Attacks (MIAs) are a common privacy attack, in which the attacker attempts to determine whether a particular real sample was used for training of the model. Previous works that propose MIAs against generative models either display low performance—giving the false impression that data is highly private—or need to assume access to internal generative model parameters—a relatively low-risk scenario, as the data publisher often only releases synthetic data, not the model. In this work we argue for a realistic MIA setting that assumes the attacker has some knowledge of the underlying data distribution. We propose DOMIAS, a density-based MIA model that aims to infer membership by targeting local overfitting of the generative model. Experimentally we show that DOMIAS is significantly more successful at MIA than previous work, especially at attacking uncommon samples. The latter is disconcerting since these samples may correspond to underrepresented groups. We also demonstrate how DOMIAS’ MIA performance score provides an interpretable metric for privacy, giving data publishers a new tool for achieving the desired privacy-utility trade-off in their synthetic data.
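The abstract's core idea, scoring a sample by comparing the synthetic-data density against the population density to detect local overfitting, can be sketched in a toy form. This is an illustrative reconstruction, not the authors' implementation: the kernel density estimator, the bandwidth, and the thresholding on the ratio are all assumptions made here for demonstration.

```python
import numpy as np

def gaussian_kde(data, points, bandwidth):
    """Isotropic Gaussian kernel density estimate of `data`, evaluated at `points`."""
    diffs = points[:, None, :] - data[None, :, :]          # (m, n, d) pairwise offsets
    sq_dist = np.sum(diffs ** 2, axis=-1)                  # (m, n) squared distances
    d = data.shape[1]
    norm = (2.0 * np.pi * bandwidth ** 2) ** (d / 2.0)     # Gaussian normalizing constant
    return np.exp(-sq_dist / (2.0 * bandwidth ** 2)).mean(axis=1) / norm

def domias_score(queries, synthetic, reference, bandwidth=0.5):
    """Density-ratio membership score (hypothetical sketch of the DOMIAS idea).

    A high ratio of synthetic density to reference (population) density at a
    query point suggests the generator locally overfits there, i.e. the point
    is more likely to have been a training member.
    """
    p_synthetic = gaussian_kde(synthetic, queries, bandwidth)
    p_reference = gaussian_kde(reference, queries, bandwidth)
    return p_synthetic / p_reference
```

In use, an attacker with a reference sample from the underlying distribution would rank candidate records by this score and flag the highest-scoring ones as likely training members; the published method uses stronger density estimators than the plain KDE shown here.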

Cite this Paper


BibTeX
@InProceedings{pmlr-v206-breugel23a,
  title     = {Membership Inference Attacks against Synthetic Data through Overfitting Detection},
  author    = {van Breugel, Boris and Sun, Hao and Qian, Zhaozhi and van der Schaar, Mihaela},
  booktitle = {Proceedings of The 26th International Conference on Artificial Intelligence and Statistics},
  pages     = {3493--3514},
  year      = {2023},
  editor    = {Ruiz, Francisco and Dy, Jennifer and van de Meent, Jan-Willem},
  volume    = {206},
  series    = {Proceedings of Machine Learning Research},
  month     = {25--27 Apr},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v206/breugel23a/breugel23a.pdf},
  url       = {https://proceedings.mlr.press/v206/breugel23a.html},
  abstract  = {Data is the foundation of most science. Unfortunately, sharing data can be obstructed by the risk of violating data privacy, impeding research in fields like healthcare. Synthetic data is a potential solution. It aims to generate data that has the same distribution as the original data, but that does not disclose information about individuals. Membership Inference Attacks (MIAs) are a common privacy attack, in which the attacker attempts to determine whether a particular real sample was used for training of the model. Previous works that propose MIAs against generative models either display low performance—giving the false impression that data is highly private—or need to assume access to internal generative model parameters—a relatively low-risk scenario, as the data publisher often only releases synthetic data, not the model. In this work we argue for a realistic MIA setting that assumes the attacker has some knowledge of the underlying data distribution. We propose DOMIAS, a density-based MIA model that aims to infer membership by targeting local overfitting of the generative model. Experimentally we show that DOMIAS is significantly more successful at MIA than previous work, especially at attacking uncommon samples. The latter is disconcerting since these samples may correspond to underrepresented groups. We also demonstrate how DOMIAS’ MIA performance score provides an interpretable metric for privacy, giving data publishers a new tool for achieving the desired privacy-utility trade-off in their synthetic data.}
}
Endnote
%0 Conference Paper
%T Membership Inference Attacks against Synthetic Data through Overfitting Detection
%A Boris van Breugel
%A Hao Sun
%A Zhaozhi Qian
%A Mihaela van der Schaar
%B Proceedings of The 26th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2023
%E Francisco Ruiz
%E Jennifer Dy
%E Jan-Willem van de Meent
%F pmlr-v206-breugel23a
%I PMLR
%P 3493--3514
%U https://proceedings.mlr.press/v206/breugel23a.html
%V 206
%X Data is the foundation of most science. Unfortunately, sharing data can be obstructed by the risk of violating data privacy, impeding research in fields like healthcare. Synthetic data is a potential solution. It aims to generate data that has the same distribution as the original data, but that does not disclose information about individuals. Membership Inference Attacks (MIAs) are a common privacy attack, in which the attacker attempts to determine whether a particular real sample was used for training of the model. Previous works that propose MIAs against generative models either display low performance—giving the false impression that data is highly private—or need to assume access to internal generative model parameters—a relatively low-risk scenario, as the data publisher often only releases synthetic data, not the model. In this work we argue for a realistic MIA setting that assumes the attacker has some knowledge of the underlying data distribution. We propose DOMIAS, a density-based MIA model that aims to infer membership by targeting local overfitting of the generative model. Experimentally we show that DOMIAS is significantly more successful at MIA than previous work, especially at attacking uncommon samples. The latter is disconcerting since these samples may correspond to underrepresented groups. We also demonstrate how DOMIAS’ MIA performance score provides an interpretable metric for privacy, giving data publishers a new tool for achieving the desired privacy-utility trade-off in their synthetic data.
APA
van Breugel, B., Sun, H., Qian, Z. & van der Schaar, M. (2023). Membership Inference Attacks against Synthetic Data through Overfitting Detection. Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 206:3493-3514. Available from https://proceedings.mlr.press/v206/breugel23a.html.
