De-mark: Watermark Removal in Large Language Models

Ruibo Chen, Yihan Wu, Junfeng Guo, Heng Huang
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:9316-9333, 2025.

Abstract

Watermarking techniques offer a promising way to identify machine-generated content via embedding covert information into the contents generated from language models (LMs). However, the robustness of the watermarking schemes has not been well explored. In this paper, we present De-mark, an advanced framework designed to remove n-gram-based watermarks effectively. Our method utilizes a novel querying strategy, termed random selection probing, which aids in assessing the strength of the watermark and identifying the red-green list within the n-gram watermark. Experiments on popular LMs, such as Llama3 and ChatGPT, demonstrate the efficiency and effectiveness of De-mark in watermark removal and exploitation tasks.
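The abstract refers to n-gram-based watermarks with a red-green list: the vocabulary is pseudorandomly partitioned based on the preceding n-gram, and watermarked generation favors "green" tokens, so detection counts how often tokens land in their green list. The sketch below illustrates that general scheme only; the hashing choice, `gamma` split, and function names are illustrative assumptions, not the paper's De-mark method or its exact watermark.

```python
import hashlib
import random

def green_list(prev_ngram, vocab_size, gamma=0.5):
    # Illustrative assumption: seed a PRNG with a hash of the preceding
    # n-gram so the red/green vocabulary split is reproducible at
    # detection time. gamma is the fraction of tokens marked "green".
    seed_bytes = " ".join(map(str, prev_ngram)).encode()
    seed = int(hashlib.sha256(seed_bytes).hexdigest(), 16) % (2**32)
    rng = random.Random(seed)
    ids = list(range(vocab_size))
    rng.shuffle(ids)
    return set(ids[: int(gamma * vocab_size)])

def green_fraction(tokens, vocab_size, n=2, gamma=0.5):
    # Detection side: for each token, recompute the green list seeded by
    # its preceding (n-1)-gram and count hits. Watermarked text skews
    # well above gamma; unwatermarked text hovers near gamma.
    hits, total = 0, 0
    for i in range(n - 1, len(tokens)):
        g = green_list(tuple(tokens[i - n + 1 : i]), vocab_size, gamma)
        hits += tokens[i] in g
        total += 1
    return hits / max(total, 1)
```

Because the partition is fully determined by the preceding n-gram, an attacker who can probe the model's token preferences can, in principle, recover the red-green split, which is the attack surface De-mark targets.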

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-chen25bq,
  title     = {De-mark: Watermark Removal in Large Language Models},
  author    = {Chen, Ruibo and Wu, Yihan and Guo, Junfeng and Huang, Heng},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {9316--9333},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/chen25bq/chen25bq.pdf},
  url       = {https://proceedings.mlr.press/v267/chen25bq.html},
  abstract  = {Watermarking techniques offer a promising way to identify machine-generated content via embedding covert information into the contents generated from language models (LMs). However, the robustness of the watermarking schemes has not been well explored. In this paper, we present De-mark, an advanced framework designed to remove n-gram-based watermarks effectively. Our method utilizes a novel querying strategy, termed random selection probing, which aids in assessing the strength of the watermark and identifying the red-green list within the n-gram watermark. Experiments on popular LMs, such as Llama3 and ChatGPT, demonstrate the efficiency and effectiveness of De-mark in watermark removal and exploitation tasks.}
}
Endnote
%0 Conference Paper
%T De-mark: Watermark Removal in Large Language Models
%A Ruibo Chen
%A Yihan Wu
%A Junfeng Guo
%A Heng Huang
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-chen25bq
%I PMLR
%P 9316--9333
%U https://proceedings.mlr.press/v267/chen25bq.html
%V 267
%X Watermarking techniques offer a promising way to identify machine-generated content via embedding covert information into the contents generated from language models (LMs). However, the robustness of the watermarking schemes has not been well explored. In this paper, we present De-mark, an advanced framework designed to remove n-gram-based watermarks effectively. Our method utilizes a novel querying strategy, termed random selection probing, which aids in assessing the strength of the watermark and identifying the red-green list within the n-gram watermark. Experiments on popular LMs, such as Llama3 and ChatGPT, demonstrate the efficiency and effectiveness of De-mark in watermark removal and exploitation tasks.
APA
Chen, R., Wu, Y., Guo, J. & Huang, H. (2025). De-mark: Watermark Removal in Large Language Models. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:9316-9333. Available from https://proceedings.mlr.press/v267/chen25bq.html.