Label differential privacy and private training data release

Robert Istvan Busa-Fekete, Andres Munoz Medina, Umar Syed, Sergei Vassilvitskii
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:3233-3251, 2023.

Abstract

We study differentially private mechanisms for sharing training data in machine learning settings. Our goal is to enable learning of an accurate predictive model while protecting the privacy of each user’s label. Previous work established privacy guarantees that assumed the features are public and given exogenously, a setting known as label differential privacy. In some scenarios, this can be a strong assumption that removes the interplay between features and labels from the privacy analysis. We relax this approach and instead assume the features are drawn from a distribution that depends on the private labels. We first show that simply adding noise to the label, as in previous work, can lead to an arbitrarily weak privacy guarantee, and also present methods for estimating this privacy loss from data. We then present a new mechanism that replaces some training examples with synthetically generated data, and show that our mechanism has a much better privacy-utility tradeoff if the synthetic data is ‘realistic’, in a certain quantifiable sense. Finally, we empirically validate our theoretical analysis.

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-busa-fekete23a,
  title     = {Label differential privacy and private training data release},
  author    = {Busa-Fekete, Robert Istvan and Munoz Medina, Andres and Syed, Umar and Vassilvitskii, Sergei},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {3233--3251},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/busa-fekete23a/busa-fekete23a.pdf},
  url       = {https://proceedings.mlr.press/v202/busa-fekete23a.html},
  abstract  = {We study differentially private mechanisms for sharing training data in machine learning settings. Our goal is to enable learning of an accurate predictive model while protecting the privacy of each user’s label. Previous work established privacy guarantees that assumed the features are public and given exogenously, a setting known as label differential privacy. In some scenarios, this can be a strong assumption that removes the interplay between features and labels from the privacy analysis. We relax this approach and instead assume the features are drawn from a distribution that depends on the private labels. We first show that simply adding noise to the label, as in previous work, can lead to an arbitrarily weak privacy guarantee, and also present methods for estimating this privacy loss from data. We then present a new mechanism that replaces some training examples with synthetically generated data, and show that our mechanism has a much better privacy-utility tradeoff if the synthetic data is ‘realistic’, in a certain quantifiable sense. Finally, we empirically validate our theoretical analysis.}
}
Endnote
%0 Conference Paper
%T Label differential privacy and private training data release
%A Robert Istvan Busa-Fekete
%A Andres Munoz Medina
%A Umar Syed
%A Sergei Vassilvitskii
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-busa-fekete23a
%I PMLR
%P 3233--3251
%U https://proceedings.mlr.press/v202/busa-fekete23a.html
%V 202
%X We study differentially private mechanisms for sharing training data in machine learning settings. Our goal is to enable learning of an accurate predictive model while protecting the privacy of each user’s label. Previous work established privacy guarantees that assumed the features are public and given exogenously, a setting known as label differential privacy. In some scenarios, this can be a strong assumption that removes the interplay between features and labels from the privacy analysis. We relax this approach and instead assume the features are drawn from a distribution that depends on the private labels. We first show that simply adding noise to the label, as in previous work, can lead to an arbitrarily weak privacy guarantee, and also present methods for estimating this privacy loss from data. We then present a new mechanism that replaces some training examples with synthetically generated data, and show that our mechanism has a much better privacy-utility tradeoff if the synthetic data is ‘realistic’, in a certain quantifiable sense. Finally, we empirically validate our theoretical analysis.
APA
Busa-Fekete, R. I., Munoz Medina, A., Syed, U., & Vassilvitskii, S. (2023). Label differential privacy and private training data release. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:3233-3251. Available from https://proceedings.mlr.press/v202/busa-fekete23a.html.