Markpainting: Adversarial Machine Learning meets Inpainting

David Khachaturov, Ilia Shumailov, Yiren Zhao, Nicolas Papernot, Ross Anderson
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:5409-5419, 2021.

Abstract

Inpainting is a learned interpolation technique that is based on generative modeling and used to populate masked or missing pieces in an image; it has wide applications in picture editing and retouching. Recently, inpainting started being used for watermark removal, raising concerns. In this paper we study how to manipulate it using our markpainting technique. First, we show how an image owner with access to an inpainting model can augment their image in such a way that any attempt to edit it using that model will add arbitrary visible information. We find that we can target multiple different models simultaneously with our technique. This can be designed to reconstitute a watermark if the editor had been trying to remove it. Second, we show that our markpainting technique is transferable to models that have different architectures or were trained on different datasets, so watermarks created using it are difficult for adversaries to remove. Markpainting is novel and can be used as a manipulation alarm that becomes visible in the event of inpainting. Source code is available at: https://github.com/iliaishacked/markpainting.
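The core idea can be illustrated with a small sketch. This is not the paper's implementation: the authors attack deep generative inpainting networks, whereas here a toy linear map stands in for the inpainter so the gradient is analytic. The owner perturbs only the visible pixels (a PGD-style update within a small L-infinity ball) so that the model's fill is pushed toward a chosen target pattern.

```python
# Hedged sketch of the markpainting idea: perturb the visible pixels of an
# image so that a (toy, differentiable) "inpainting" model reproduces a
# chosen target pattern. The linear map A is a hypothetical stand-in for a
# generative inpainting network.
import numpy as np

rng = np.random.default_rng(0)
n = 16                                          # flattened toy image size
A = rng.standard_normal((n, n)) / np.sqrt(n)    # toy differentiable "inpainter"

def inpaint(x):
    return A @ x                                # stand-in for a real model

x = rng.random(n)                               # original image (flattened)
target = rng.random(n)                          # pattern the owner wants reconstituted
mask = np.zeros(n)
mask[: n // 2] = 1.0                            # 1 = pixels the owner may perturb

x_adv = x.copy()
step, iters, eps = 0.05, 200, 0.3
for _ in range(iters):
    # gradient of ||inpaint(x_adv) - target||^2 with respect to x_adv
    grad = 2 * A.T @ (inpaint(x_adv) - target)
    x_adv -= step * mask * grad                 # projected gradient step, visible pixels only
    x_adv = np.clip(x_adv, x - eps, x + eps)    # keep the perturbation small
    x_adv = np.clip(x_adv, 0.0, 1.0)            # stay in valid pixel range

loss0 = np.sum((inpaint(x) - target) ** 2)
loss1 = np.sum((inpaint(x_adv) - target) ** 2)
print(loss1 < loss0)  # the perturbed image steers the toy inpainter toward the target
```

Against real inpainting networks the gradient is obtained by backpropagation through the model, and the paper further optimizes against several models at once to improve transferability.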

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-khachaturov21a,
  title     = {Markpainting: Adversarial Machine Learning meets Inpainting},
  author    = {Khachaturov, David and Shumailov, Ilia and Zhao, Yiren and Papernot, Nicolas and Anderson, Ross},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {5409--5419},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/khachaturov21a/khachaturov21a.pdf},
  url       = {https://proceedings.mlr.press/v139/khachaturov21a.html},
  abstract  = {Inpainting is a learned interpolation technique that is based on generative modeling and used to populate masked or missing pieces in an image; it has wide applications in picture editing and retouching. Recently, inpainting started being used for watermark removal, raising concerns. In this paper we study how to manipulate it using our markpainting technique. First, we show how an image owner with access to an inpainting model can augment their image in such a way that any attempt to edit it using that model will add arbitrary visible information. We find that we can target multiple different models simultaneously with our technique. This can be designed to reconstitute a watermark if the editor had been trying to remove it. Second, we show that our markpainting technique is transferable to models that have different architectures or were trained on different datasets, so watermarks created using it are difficult for adversaries to remove. Markpainting is novel and can be used as a manipulation alarm that becomes visible in the event of inpainting. Source code is available at: https://github.com/iliaishacked/markpainting.}
}
Endnote
%0 Conference Paper
%T Markpainting: Adversarial Machine Learning meets Inpainting
%A David Khachaturov
%A Ilia Shumailov
%A Yiren Zhao
%A Nicolas Papernot
%A Ross Anderson
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-khachaturov21a
%I PMLR
%P 5409--5419
%U https://proceedings.mlr.press/v139/khachaturov21a.html
%V 139
%X Inpainting is a learned interpolation technique that is based on generative modeling and used to populate masked or missing pieces in an image; it has wide applications in picture editing and retouching. Recently, inpainting started being used for watermark removal, raising concerns. In this paper we study how to manipulate it using our markpainting technique. First, we show how an image owner with access to an inpainting model can augment their image in such a way that any attempt to edit it using that model will add arbitrary visible information. We find that we can target multiple different models simultaneously with our technique. This can be designed to reconstitute a watermark if the editor had been trying to remove it. Second, we show that our markpainting technique is transferable to models that have different architectures or were trained on different datasets, so watermarks created using it are difficult for adversaries to remove. Markpainting is novel and can be used as a manipulation alarm that becomes visible in the event of inpainting. Source code is available at: https://github.com/iliaishacked/markpainting.
APA
Khachaturov, D., Shumailov, I., Zhao, Y., Papernot, N. &amp; Anderson, R. (2021). Markpainting: Adversarial Machine Learning meets Inpainting. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:5409-5419. Available from https://proceedings.mlr.press/v139/khachaturov21a.html.