Bounding Training Data Reconstruction in Private (Deep) Learning

Chuan Guo, Brian Karrer, Kamalika Chaudhuri, Laurens van der Maaten
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:8056-8071, 2022.

Abstract

Differential privacy is widely accepted as the de facto method for preventing data leakage in ML, and conventional wisdom suggests that it offers strong protection against privacy attacks. However, existing semantic guarantees for DP focus on membership inference, which may overestimate the adversary’s capabilities and is not applicable when membership status itself is non-sensitive. In this paper, we derive the first semantic guarantees for DP mechanisms against training data reconstruction attacks under a formal threat model. We show that two distinct privacy accounting methods—Renyi differential privacy and Fisher information leakage—both offer strong semantic protection against data reconstruction attacks.
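
As background for the first of the two accounting methods named above (this is the standard definition of Rényi differential privacy, provided here for context rather than quoted from the paper): a randomized mechanism M satisfies (α, ε)-Rényi differential privacy if, for every pair of adjacent datasets D and D',

    D_\alpha\big(M(D) \,\|\, M(D')\big) \;=\; \frac{1}{\alpha - 1} \log \, \mathbb{E}_{x \sim M(D')}\!\left[\left(\frac{\Pr[M(D) = x]}{\Pr[M(D') = x]}\right)^{\alpha}\right] \;\le\; \varepsilon.

As the abstract states, the paper translates guarantees of this kind, together with analogous bounds on Fisher information leakage, into semantic guarantees that limit an adversary's ability to reconstruct individual training examples.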

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-guo22c,
  title     = {Bounding Training Data Reconstruction in Private (Deep) Learning},
  author    = {Guo, Chuan and Karrer, Brian and Chaudhuri, Kamalika and van der Maaten, Laurens},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {8056--8071},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/guo22c/guo22c.pdf},
  url       = {https://proceedings.mlr.press/v162/guo22c.html},
  abstract  = {Differential privacy is widely accepted as the de facto method for preventing data leakage in ML, and conventional wisdom suggests that it offers strong protection against privacy attacks. However, existing semantic guarantees for DP focus on membership inference, which may overestimate the adversary’s capabilities and is not applicable when membership status itself is non-sensitive. In this paper, we derive the first semantic guarantees for DP mechanisms against training data reconstruction attacks under a formal threat model. We show that two distinct privacy accounting methods—Renyi differential privacy and Fisher information leakage—both offer strong semantic protection against data reconstruction attacks.}
}
Endnote
%0 Conference Paper
%T Bounding Training Data Reconstruction in Private (Deep) Learning
%A Chuan Guo
%A Brian Karrer
%A Kamalika Chaudhuri
%A Laurens van der Maaten
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-guo22c
%I PMLR
%P 8056--8071
%U https://proceedings.mlr.press/v162/guo22c.html
%V 162
%X Differential privacy is widely accepted as the de facto method for preventing data leakage in ML, and conventional wisdom suggests that it offers strong protection against privacy attacks. However, existing semantic guarantees for DP focus on membership inference, which may overestimate the adversary’s capabilities and is not applicable when membership status itself is non-sensitive. In this paper, we derive the first semantic guarantees for DP mechanisms against training data reconstruction attacks under a formal threat model. We show that two distinct privacy accounting methods—Renyi differential privacy and Fisher information leakage—both offer strong semantic protection against data reconstruction attacks.
APA
Guo, C., Karrer, B., Chaudhuri, K. & van der Maaten, L. (2022). Bounding Training Data Reconstruction in Private (Deep) Learning. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:8056-8071. Available from https://proceedings.mlr.press/v162/guo22c.html.
