DE-COP: Detecting Copyrighted Content in Language Models Training Data

André Vicente Duarte, Xuandong Zhao, Arlindo L. Oliveira, Lei Li
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:11940-11956, 2024.

Abstract

How can we detect if copyrighted content was used in the training process of a language model, considering that the training data is typically undisclosed? We are motivated by the premise that a language model is likely to identify verbatim excerpts from its training text. We propose DE-COP, a method to determine whether a piece of copyrighted content is included in training. DE-COP’s core approach is to probe an LLM with multiple-choice questions, whose options include both verbatim text and their paraphrases. We construct BookTection, a benchmark with excerpts from 165 books published prior and subsequent to a model’s training cutoff, along with their paraphrases. Our experiments show that DE-COP outperforms the prior best method by 8.6% in detection accuracy (AUC) on models with logits available. Moreover, DE-COP also achieves an average accuracy of 72% for detecting suspect books on fully black-box models where prior methods give approximately 0% accuracy. The code and datasets are available at https://github.com/LeiLiLab/DE-COP.
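The abstract describes DE-COP's core mechanic: present the model with one verbatim excerpt mixed among paraphrases, and measure how often it picks out the original. The sketch below illustrates that probing loop in Python; `ask_model` is a hypothetical stand-in for querying an LLM, and the prompt wording is illustrative, not the paper's exact template.

```python
import random
import string

def build_probe(verbatim, paraphrases, rng):
    """Build one DE-COP-style multiple-choice question.

    Options mix the verbatim excerpt with its paraphrases, shuffled so
    the correct position is random. Returns the prompt text and the
    letter labeling the verbatim option.
    """
    options = [verbatim] + list(paraphrases)
    rng.shuffle(options)
    letters = string.ascii_uppercase[:len(options)]
    lines = ["Which of the following passages is the exact original text?"]
    for letter, text in zip(letters, options):
        lines.append(f"{letter}. {text}")
    answer = letters[options.index(verbatim)]
    return "\n".join(lines), answer

def detection_rate(examples, ask_model, rng):
    """Fraction of probes where the model selects the verbatim option.

    `ask_model(prompt) -> letter` abstracts the LLM query. A rate well
    above chance (1/k for k options) is evidence that the source text
    appeared in training data.
    """
    hits = 0
    for verbatim, paraphrases in examples:
        prompt, answer = build_probe(verbatim, paraphrases, rng)
        if ask_model(prompt) == answer:
            hits += 1
    return hits / len(examples)
```

In the paper's setting, the per-example rates for books published before the cutoff are compared against rates for books published after it, which by construction cannot have been memorized.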

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-duarte24a,
  title     = {{DE}-{COP}: Detecting Copyrighted Content in Language Models Training Data},
  author    = {Duarte, Andr\'{e} Vicente and Zhao, Xuandong and Oliveira, Arlindo L. and Li, Lei},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {11940--11956},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/duarte24a/duarte24a.pdf},
  url       = {https://proceedings.mlr.press/v235/duarte24a.html},
  abstract  = {How can we detect if copyrighted content was used in the training process of a language model, considering that the training data is typically undisclosed? We are motivated by the premise that a language model is likely to identify verbatim excerpts from its training text. We propose DE-COP, a method to determine whether a piece of copyrighted content is included in training. DE-COP's core approach is to probe an LLM with multiple-choice questions, whose options include both verbatim text and their paraphrases. We construct BookTection, a benchmark with excerpts from 165 books published prior and subsequent to a model's training cutoff, along with their paraphrases. Our experiments show that DE-COP outperforms the prior best method by 8.6% in detection accuracy (AUC) on models with logits available. Moreover, DE-COP also achieves an average accuracy of 72% for detecting suspect books on fully black-box models where prior methods give approximately 0% accuracy. The code and datasets are available at https://github.com/LeiLiLab/DE-COP.}
}
Endnote
%0 Conference Paper
%T DE-COP: Detecting Copyrighted Content in Language Models Training Data
%A André Vicente Duarte
%A Xuandong Zhao
%A Arlindo L. Oliveira
%A Lei Li
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-duarte24a
%I PMLR
%P 11940--11956
%U https://proceedings.mlr.press/v235/duarte24a.html
%V 235
%X How can we detect if copyrighted content was used in the training process of a language model, considering that the training data is typically undisclosed? We are motivated by the premise that a language model is likely to identify verbatim excerpts from its training text. We propose DE-COP, a method to determine whether a piece of copyrighted content is included in training. DE-COP's core approach is to probe an LLM with multiple-choice questions, whose options include both verbatim text and their paraphrases. We construct BookTection, a benchmark with excerpts from 165 books published prior and subsequent to a model's training cutoff, along with their paraphrases. Our experiments show that DE-COP outperforms the prior best method by 8.6% in detection accuracy (AUC) on models with logits available. Moreover, DE-COP also achieves an average accuracy of 72% for detecting suspect books on fully black-box models where prior methods give approximately 0% accuracy. The code and datasets are available at https://github.com/LeiLiLab/DE-COP.
APA
Duarte, A.V., Zhao, X., Oliveira, A.L. & Li, L. (2024). DE-COP: Detecting Copyrighted Content in Language Models Training Data. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:11940-11956. Available from https://proceedings.mlr.press/v235/duarte24a.html.