Margin-based Neural Network Watermarking

Byungjoo Kim, Suyoung Lee, Seanie Lee, Sooel Son, Sung Ju Hwang
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:16696-16711, 2023.

Abstract

As Machine Learning as a Service (MLaaS) platforms become prevalent, deep neural network (DNN) watermarking techniques, which enable one to verify the ownership of a target DNN model in a black-box scenario, are gaining increasing attention. Unfortunately, previous watermarking methods are vulnerable to functionality stealing attacks, allowing an adversary to falsely claim ownership of a DNN model stolen from its original owner. In this work, we propose a novel margin-based DNN watermarking approach that is robust to functionality stealing attacks based on model extraction and distillation. Specifically, during training, our method maximizes the margins of watermarked samples via projected gradient ascent so that their predicted labels cannot change without compromising the accuracy of the model that the attacker tries to steal. We validate our method on multiple benchmarks and show that it successfully defends against model extraction attacks, outperforming recent baselines.
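To make the mechanism described in the abstract concrete, below is a minimal PyTorch-style sketch of the idea: watermark ("key") samples are perturbed with projected gradient ascent to find worst-case points near them, and the model is trained so that the assigned key labels still hold at those points, which pushes the decision boundary away from the keys and enlarges their margin. The function names, hyperparameters (eps, step size, loss weight), and the cross-entropy surrogate for the margin are illustrative assumptions, not the authors' released implementation.

import torch
import torch.nn.functional as F

def pgd_worst_case(model, x_key, y_key, eps=8/255, alpha=2/255, steps=10):
    # Search inside an L-infinity ball around each key sample for the point
    # that most increases the loss on the assigned key label (a proxy for
    # the point of smallest margin).
    x_adv = x_key.detach() + torch.empty_like(x_key).uniform_(-eps, eps)
    x_adv = x_adv.clamp(0.0, 1.0)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y_key)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()               # gradient ascent step
            x_adv = x_key + (x_adv - x_key).clamp(-eps, eps)  # project back into the ball
            x_adv = x_adv.clamp(0.0, 1.0)                     # keep a valid pixel range
    return x_adv.detach()

def train_step(model, optimizer, x_clean, y_clean, x_key, y_key, lam=1.0):
    # One joint update: fit the clean data as usual, and additionally require
    # the key labels to hold at the worst-case perturbed key samples, which
    # enlarges the margin of the watermarked samples.
    x_worst = pgd_worst_case(model, x_key, y_key)
    loss = F.cross_entropy(model(x_clean), y_clean) \
         + lam * F.cross_entropy(model(x_worst), y_key)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

The intuition behind this kind of objective is that an attacker who extracts or distills the model must reproduce its behavior in a neighborhood of the key samples to preserve accuracy, so the key predictions (and hence the watermark) survive the stealing process.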

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-kim23o,
  title     = {Margin-based Neural Network Watermarking},
  author    = {Kim, Byungjoo and Lee, Suyoung and Lee, Seanie and Son, Sooel and Hwang, Sung Ju},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {16696--16711},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/kim23o/kim23o.pdf},
  url       = {https://proceedings.mlr.press/v202/kim23o.html},
  abstract  = {As Machine Learning as a Service (MLaaS) platforms become prevalent, deep neural network (DNN) watermarking techniques are gaining increasing attention, which enables one to verify the ownership of a target DNN model in a black-box scenario. Unfortunately, previous watermarking methods are vulnerable to functionality stealing attacks, thus allowing an adversary to falsely claim the ownership of a DNN model stolen from its original owner. In this work, we propose a novel margin-based DNN watermarking approach that is robust to the functionality stealing attacks based on model extraction and distillation. Specifically, during training, our method maximizes the margins of watermarked samples by using projected gradient ascent on them so that their predicted labels cannot change without compromising the accuracy of the model that the attacker tries to steal. We validate our method on multiple benchmarks and show that our watermarking method successfully defends against model extraction attacks, outperforming recent baselines.}
}
Endnote
%0 Conference Paper
%T Margin-based Neural Network Watermarking
%A Byungjoo Kim
%A Suyoung Lee
%A Seanie Lee
%A Sooel Son
%A Sung Ju Hwang
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-kim23o
%I PMLR
%P 16696--16711
%U https://proceedings.mlr.press/v202/kim23o.html
%V 202
%X As Machine Learning as a Service (MLaaS) platforms become prevalent, deep neural network (DNN) watermarking techniques are gaining increasing attention, which enables one to verify the ownership of a target DNN model in a black-box scenario. Unfortunately, previous watermarking methods are vulnerable to functionality stealing attacks, thus allowing an adversary to falsely claim the ownership of a DNN model stolen from its original owner. In this work, we propose a novel margin-based DNN watermarking approach that is robust to the functionality stealing attacks based on model extraction and distillation. Specifically, during training, our method maximizes the margins of watermarked samples by using projected gradient ascent on them so that their predicted labels cannot change without compromising the accuracy of the model that the attacker tries to steal. We validate our method on multiple benchmarks and show that our watermarking method successfully defends against model extraction attacks, outperforming recent baselines.
APA
Kim, B., Lee, S., Lee, S., Son, S. & Hwang, S.J. (2023). Margin-based Neural Network Watermarking. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:16696-16711. Available from https://proceedings.mlr.press/v202/kim23o.html.