Beyond Invisibility: Learning Robust Visible Watermarks for Stronger Copyright Protection

Tianci Liu, Tong Yang, Quan Zhang, Qi Lei
Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence, PMLR 286:2769-2785, 2025.

Abstract

As AI advances, copyrighted content faces a growing risk of unauthorized use, whether through model training or direct misuse. Building on invisible adversarial perturbations, recent works developed copyright protections against specific AI techniques that are misused, such as unauthorized personalization through DreamBooth. However, these methods offer only short-term security, as they require retraining whenever the underlying model architectures change. To establish long-term protection with better robustness, we go beyond invisible perturbations and propose a universal approach that embeds \textit{visible}, \textit{hard-to-remove} watermarks into images. Grounded in a new probabilistic, inverse-problem-based formulation, our framework maximizes the discrepancy between the \textit{optimal} reconstruction and the original content. We develop an effective and efficient approximation algorithm that circumvents an intractable bi-level optimization. Experimental results demonstrate the superiority of our approach across diverse scenarios.
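The bi-level structure described in the abstract can be sketched in generic notation (this is an illustrative reconstruction, not the paper's exact formulation: here $x$ denotes the original image, $w$ the visible watermark, $\oplus$ the embedding operation, $\hat{x}$ a candidate reconstruction, $\mathcal{L}$ the remover's reconstruction loss, and $d$ a discrepancy measure):

```latex
\max_{w} \; d\big(\hat{x}^{*}(x \oplus w),\, x\big)
\quad \text{s.t.} \quad
\hat{x}^{*}(x \oplus w) \in \arg\min_{\hat{x}} \; \mathcal{L}\big(\hat{x},\, x \oplus w\big)
```

The outer problem chooses the watermark so that even the \textit{optimal} inner reconstruction remains far from the original content, which is what makes the watermark hard to remove; the paper's approximation algorithm avoids solving this nested optimization exactly.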

Cite this Paper


BibTeX
@InProceedings{pmlr-v286-liu25f,
  title     = {Beyond Invisibility: Learning Robust Visible Watermarks for Stronger Copyright Protection},
  author    = {Liu, Tianci and Yang, Tong and Zhang, Quan and Lei, Qi},
  booktitle = {Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence},
  pages     = {2769--2785},
  year      = {2025},
  editor    = {Chiappa, Silvia and Magliacane, Sara},
  volume    = {286},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--25 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v286/main/assets/liu25f/liu25f.pdf},
  url       = {https://proceedings.mlr.press/v286/liu25f.html},
  abstract  = {As AI advances, copyrighted content faces a growing risk of unauthorized use, whether through model training or direct misuse. Building on invisible adversarial perturbations, recent works developed copyright protections against specific AI techniques that are misused, such as unauthorized personalization through DreamBooth. However, these methods offer only short-term security, as they require retraining whenever the underlying model architectures change. To establish long-term protection with better robustness, we go beyond invisible perturbations and propose a universal approach that embeds \textit{visible}, \textit{hard-to-remove} watermarks into images. Grounded in a new probabilistic, inverse-problem-based formulation, our framework maximizes the discrepancy between the \textit{optimal} reconstruction and the original content. We develop an effective and efficient approximation algorithm that circumvents an intractable bi-level optimization. Experimental results demonstrate the superiority of our approach across diverse scenarios.}
}
Endnote
%0 Conference Paper
%T Beyond Invisibility: Learning Robust Visible Watermarks for Stronger Copyright Protection
%A Tianci Liu
%A Tong Yang
%A Quan Zhang
%A Qi Lei
%B Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence
%C Proceedings of Machine Learning Research
%D 2025
%E Silvia Chiappa
%E Sara Magliacane
%F pmlr-v286-liu25f
%I PMLR
%P 2769--2785
%U https://proceedings.mlr.press/v286/liu25f.html
%V 286
%X As AI advances, copyrighted content faces a growing risk of unauthorized use, whether through model training or direct misuse. Building on invisible adversarial perturbations, recent works developed copyright protections against specific AI techniques that are misused, such as unauthorized personalization through DreamBooth. However, these methods offer only short-term security, as they require retraining whenever the underlying model architectures change. To establish long-term protection with better robustness, we go beyond invisible perturbations and propose a universal approach that embeds \textit{visible}, \textit{hard-to-remove} watermarks into images. Grounded in a new probabilistic, inverse-problem-based formulation, our framework maximizes the discrepancy between the \textit{optimal} reconstruction and the original content. We develop an effective and efficient approximation algorithm that circumvents an intractable bi-level optimization. Experimental results demonstrate the superiority of our approach across diverse scenarios.
APA
Liu, T., Yang, T., Zhang, Q., & Lei, Q. (2025). Beyond Invisibility: Learning Robust Visible Watermarks for Stronger Copyright Protection. Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence, in Proceedings of Machine Learning Research 286:2769-2785. Available from https://proceedings.mlr.press/v286/liu25f.html.