Privacy Amplification Through Synthetic Data: Insights from Linear Regression

Clément Pierquin, Aurélien Bellet, Marc Tommasi, Matthieu Boussard
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:49329-49354, 2025.

Abstract

Synthetic data inherits the differential privacy guarantees of the model used to generate it. Additionally, synthetic data may benefit from privacy amplification when the generative model is kept hidden. While empirical studies suggest this phenomenon, a rigorous theoretical understanding is still lacking. In this paper, we investigate this question through the well-understood framework of linear regression. First, we establish negative results showing that if an adversary controls the seed of the generative model, a single synthetic data point can leak as much information as releasing the model itself. Conversely, we show that when synthetic data is generated from random inputs, releasing a limited number of synthetic data points amplifies privacy beyond the model’s inherent guarantees. We believe our findings in linear regression can serve as a foundation for deriving more general bounds in the future.

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-pierquin25a,
  title     = {Privacy Amplification Through Synthetic Data: Insights from Linear Regression},
  author    = {Pierquin, Cl\'{e}ment and Bellet, Aur\'{e}lien and Tommasi, Marc and Boussard, Matthieu},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {49329--49354},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/pierquin25a/pierquin25a.pdf},
  url       = {https://proceedings.mlr.press/v267/pierquin25a.html},
  abstract  = {Synthetic data inherits the differential privacy guarantees of the model used to generate it. Additionally, synthetic data may benefit from privacy amplification when the generative model is kept hidden. While empirical studies suggest this phenomenon, a rigorous theoretical understanding is still lacking. In this paper, we investigate this question through the well-understood framework of linear regression. First, we establish negative results showing that if an adversary controls the seed of the generative model, a single synthetic data point can leak as much information as releasing the model itself. Conversely, we show that when synthetic data is generated from random inputs, releasing a limited number of synthetic data points amplifies privacy beyond the model's inherent guarantees. We believe our findings in linear regression can serve as a foundation for deriving more general bounds in the future.}
}
Endnote
%0 Conference Paper
%T Privacy Amplification Through Synthetic Data: Insights from Linear Regression
%A Clément Pierquin
%A Aurélien Bellet
%A Marc Tommasi
%A Matthieu Boussard
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-pierquin25a
%I PMLR
%P 49329--49354
%U https://proceedings.mlr.press/v267/pierquin25a.html
%V 267
%X Synthetic data inherits the differential privacy guarantees of the model used to generate it. Additionally, synthetic data may benefit from privacy amplification when the generative model is kept hidden. While empirical studies suggest this phenomenon, a rigorous theoretical understanding is still lacking. In this paper, we investigate this question through the well-understood framework of linear regression. First, we establish negative results showing that if an adversary controls the seed of the generative model, a single synthetic data point can leak as much information as releasing the model itself. Conversely, we show that when synthetic data is generated from random inputs, releasing a limited number of synthetic data points amplifies privacy beyond the model's inherent guarantees. We believe our findings in linear regression can serve as a foundation for deriving more general bounds in the future.
APA
Pierquin, C., Bellet, A., Tommasi, M. & Boussard, M. (2025). Privacy Amplification Through Synthetic Data: Insights from Linear Regression. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:49329-49354. Available from https://proceedings.mlr.press/v267/pierquin25a.html.