Label differential privacy via clustering

Hossein Esfandiari, Vahab Mirrokni, Umar Syed, Sergei Vassilvitskii
Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:7055-7075, 2022.

Abstract

We present new mechanisms for label differential privacy, a relaxation of differentially private machine learning that only protects the privacy of the labels in the training set. Our mechanisms cluster the examples in the training set using their (non-private) feature vectors, randomly re-sample each label from examples in the same cluster, and output a training set with noisy labels as well as a modified version of the true loss function. We prove that when the clusters are both large and high-quality, the model that minimizes the modified loss on the noisy training set converges to small excess risk at a rate that is comparable to the rate for non-private learning. We also describe a learning problem in which large clusters are necessary to achieve both strong privacy and either good precision or good recall. Our experiments show that randomizing the labels within each cluster significantly improves the privacy vs. accuracy trade-off compared to applying uniform randomized response to the labels, and also compared to learning a model via DP-SGD.
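The core randomization step described above can be illustrated with a minimal sketch. Assume cluster assignments have already been computed from the (non-private) feature vectors, e.g. by k-means; the function below then replaces each label with one drawn uniformly from the labels in the same cluster. This is only an illustration of the resampling idea, not the paper's actual mechanism: it omits the calibrated noise and the modified loss function needed for a formal label-DP guarantee.

```python
import numpy as np

def resample_labels_within_clusters(labels, clusters, rng=None):
    """Replace each example's label with one drawn uniformly at random
    from the labels of all examples in the same cluster.

    labels:   array of shape (n,) with the true labels
    clusters: array of shape (n,) with a cluster id per example
    """
    rng = np.random.default_rng(rng)
    labels = np.asarray(labels)
    clusters = np.asarray(clusters)
    noisy = labels.copy()
    for c in np.unique(clusters):
        idx = np.flatnonzero(clusters == c)
        # Sample each new label (with replacement) from this cluster's labels.
        noisy[idx] = rng.choice(labels[idx], size=idx.size, replace=True)
    return noisy
```

Intuitively, when clusters are large and "high-quality" (labels are nearly homogeneous within a cluster), the resampled label usually agrees with the true one, which is why this can beat uniform randomized response at the same privacy level.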

Cite this Paper


BibTeX
@InProceedings{pmlr-v151-esfandiari22a,
  title     = {Label differential privacy via clustering},
  author    = {Esfandiari, Hossein and Mirrokni, Vahab and Syed, Umar and Vassilvitskii, Sergei},
  booktitle = {Proceedings of The 25th International Conference on Artificial Intelligence and Statistics},
  pages     = {7055--7075},
  year      = {2022},
  editor    = {Camps-Valls, Gustau and Ruiz, Francisco J. R. and Valera, Isabel},
  volume    = {151},
  series    = {Proceedings of Machine Learning Research},
  month     = {28--30 Mar},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v151/esfandiari22a/esfandiari22a.pdf},
  url       = {https://proceedings.mlr.press/v151/esfandiari22a.html}
}
Endnote
%0 Conference Paper
%T Label differential privacy via clustering
%A Hossein Esfandiari
%A Vahab Mirrokni
%A Umar Syed
%A Sergei Vassilvitskii
%B Proceedings of The 25th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2022
%E Gustau Camps-Valls
%E Francisco J. R. Ruiz
%E Isabel Valera
%F pmlr-v151-esfandiari22a
%I PMLR
%P 7055--7075
%U https://proceedings.mlr.press/v151/esfandiari22a.html
%V 151
APA
Esfandiari, H., Mirrokni, V., Syed, U. & Vassilvitskii, S. (2022). Label differential privacy via clustering. Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 151:7055-7075. Available from https://proceedings.mlr.press/v151/esfandiari22a.html.