Deduplicating Training Data Mitigates Privacy Risks in Language Models

Nikhil Kandpal, Eric Wallace, Colin Raffel
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:10697-10707, 2022.

Abstract

Past work has shown that large language models are susceptible to privacy attacks, where adversaries generate sequences from a trained model and detect which sequences are memorized from the training set. In this work, we show that the success of these attacks is largely due to duplication in commonly used web-scraped training sets. We first show that the rate at which language models regenerate training sequences is superlinearly related to a sequence’s count in the training set. For instance, a sequence that is present 10 times in the training data is on average generated 1000x more often than a sequence that is present only once. We next show that existing methods for detecting memorized sequences have near-chance accuracy on non-duplicated training sequences. Finally, we find that after applying methods to deduplicate training data, language models are considerably more secure against these types of privacy attacks. Taken together, our results motivate an increased focus on deduplication in privacy-sensitive applications and a reevaluation of the practicality of existing privacy attacks.
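To make the superlinear claim concrete: a minimal sketch, assuming the regeneration rate follows a power law in the duplicate count (an illustrative functional form, not a fit taken from the paper), shows that the abstract's example of 10x duplication yielding 1000x more generations corresponds to roughly cubic scaling.

import math

# Illustrative sketch only: assume the expected regeneration rate scales
# as a power law in the duplicate count, rate(c) ~ c**k. This functional
# form is an assumption for illustration, not a result quoted from the paper.
#
# The abstract's example: a sequence appearing 10 times is generated
# ~1000x more often than one appearing once, so 10**k = 1000.
k = math.log(1000) / math.log(10)
print(f"implied exponent k = {k:.1f}")  # k = 3.0, i.e. roughly cubic scaling

# Under this assumed power law, even modest duplication is strongly
# amplified: doubling a sequence's count would multiply its expected
# generation rate by about 2**k = 8x.
print(f"rate multiplier for 2x duplication ~ {2 ** k:.0f}x")

Under such an assumption, deduplication attacks the steepest part of the curve: driving duplicate counts toward one places sequences in the regime where, per the abstract, memorization detectors perform at near-chance accuracy.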

Cite this Paper

BibTeX
@InProceedings{pmlr-v162-kandpal22a,
  title     = {Deduplicating Training Data Mitigates Privacy Risks in Language Models},
  author    = {Kandpal, Nikhil and Wallace, Eric and Raffel, Colin},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {10697--10707},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/kandpal22a/kandpal22a.pdf},
  url       = {https://proceedings.mlr.press/v162/kandpal22a.html},
  abstract  = {Past work has shown that large language models are susceptible to privacy attacks, where adversaries generate sequences from a trained model and detect which sequences are memorized from the training set. In this work, we show that the success of these attacks is largely due to duplication in commonly used web-scraped training sets. We first show that the rate at which language models regenerate training sequences is superlinearly related to a sequence’s count in the training set. For instance, a sequence that is present 10 times in the training data is on average generated 1000x more often than a sequence that is present only once. We next show that existing methods for detecting memorized sequences have near-chance accuracy on non-duplicated training sequences. Finally, we find that after applying methods to deduplicate training data, language models are considerably more secure against these types of privacy attacks. Taken together, our results motivate an increased focus on deduplication in privacy-sensitive applications and a reevaluation of the practicality of existing privacy attacks.}
}
Endnote
%0 Conference Paper
%T Deduplicating Training Data Mitigates Privacy Risks in Language Models
%A Nikhil Kandpal
%A Eric Wallace
%A Colin Raffel
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-kandpal22a
%I PMLR
%P 10697--10707
%U https://proceedings.mlr.press/v162/kandpal22a.html
%V 162
%X Past work has shown that large language models are susceptible to privacy attacks, where adversaries generate sequences from a trained model and detect which sequences are memorized from the training set. In this work, we show that the success of these attacks is largely due to duplication in commonly used web-scraped training sets. We first show that the rate at which language models regenerate training sequences is superlinearly related to a sequence’s count in the training set. For instance, a sequence that is present 10 times in the training data is on average generated 1000x more often than a sequence that is present only once. We next show that existing methods for detecting memorized sequences have near-chance accuracy on non-duplicated training sequences. Finally, we find that after applying methods to deduplicate training data, language models are considerably more secure against these types of privacy attacks. Taken together, our results motivate an increased focus on deduplication in privacy-sensitive applications and a reevaluation of the practicality of existing privacy attacks.
APA
Kandpal, N., Wallace, E. & Raffel, C. (2022). Deduplicating Training Data Mitigates Privacy Risks in Language Models. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:10697-10707. Available from https://proceedings.mlr.press/v162/kandpal22a.html.
