Adversarial Attacks on Copyright Detection Systems

Parsa Saadatpanah, Ali Shafahi, Tom Goldstein
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:8307-8315, 2020.

Abstract

It is well-known that many machine learning models are susceptible to adversarial attacks, in which an attacker evades a classifier by making small perturbations to inputs. This paper discusses how industrial copyright detection tools, which serve a central role on the web, are susceptible to adversarial attacks. As proof of concept, we describe a well-known music identification method and implement this system in the form of a neural net. We then attack this system using simple gradient methods and show that it is easily broken with white-box attacks. By scaling these perturbations up, we can create transfer attacks on industrial systems, such as the AudioTag copyright detector and YouTube’s Content ID system, using perturbations that are audible but significantly smaller than a random baseline. Our goal is to raise awareness of the threats posed by adversarial examples in this space and to highlight the importance of hardening copyright detection systems to attacks.
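The white-box attack the abstract describes (gradient methods against a differentiable reimplementation of a fingerprinting model) can be illustrated with a minimal sketch. Everything below is hypothetical and not the paper's implementation: the toy "fingerprint" is just an FFT-magnitude template, `match_score` is a stand-in similarity function, and the update is a generic FGSM-style signed-gradient step rather than the authors' exact optimizer.

```python
import numpy as np

# Toy setup: a clip of audio and a stored spectral "fingerprint" template.
# (Hypothetical model, for illustration only.)
rng = np.random.default_rng(0)
audio = rng.standard_normal(1024)            # toy audio waveform
fingerprint = np.abs(np.fft.rfft(audio))     # pretend stored template

def match_score(x, template):
    """Similarity between a clip's FFT magnitude and a stored template."""
    return float(np.abs(np.fft.rfft(x)) @ template)

def score_gradient(x, template, h=1e-4):
    """Finite-difference gradient of the match score w.r.t. the waveform."""
    grad = np.zeros_like(x)
    base = match_score(x, template)
    for i in range(len(x)):
        xp = x.copy()
        xp[i] += h
        grad[i] = (match_score(xp, template) - base) / h
    return grad

# One FGSM-style step: move *against* the gradient to lower the match score
# (evade detection) while bounding the perturbation in the infinity norm.
epsilon = 0.02
g = score_gradient(audio, fingerprint)
adversarial = audio - epsilon * np.sign(g)

score_before = match_score(audio, fingerprint)
score_after = match_score(adversarial, fingerprint)
```

In a real white-box attack the gradient would come from backpropagation through a neural-network surrogate rather than finite differences, and the step would typically be iterated; this single step just shows how a small, bounded perturbation can push the detector's similarity score down.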

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-saadatpanah20a,
  title     = {Adversarial Attacks on Copyright Detection Systems},
  author    = {Saadatpanah, Parsa and Shafahi, Ali and Goldstein, Tom},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {8307--8315},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/saadatpanah20a/saadatpanah20a.pdf},
  url       = {http://proceedings.mlr.press/v119/saadatpanah20a.html}
}
Endnote
%0 Conference Paper
%T Adversarial Attacks on Copyright Detection Systems
%A Parsa Saadatpanah
%A Ali Shafahi
%A Tom Goldstein
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-saadatpanah20a
%I PMLR
%P 8307--8315
%U http://proceedings.mlr.press/v119/saadatpanah20a.html
%V 119
APA
Saadatpanah, P., Shafahi, A. & Goldstein, T. (2020). Adversarial Attacks on Copyright Detection Systems. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:8307-8315. Available from http://proceedings.mlr.press/v119/saadatpanah20a.html.