LaVAN: Localized and Visible Adversarial Noise

Danny Karmon, Daniel Zoran, Yoav Goldberg
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:2507-2515, 2018.

Abstract

Most work on adversarial examples for deep-learning-based image classifiers uses noise that, while small, covers the entire image. We explore the case where the noise is allowed to be visible but confined to a small, localized patch of the image, without covering any of the main object(s) in the image. We show that it is possible to generate localized adversarial noise that covers only 2% of the pixels in the image, none of them over the main object, is transferable across images and locations, and fools a state-of-the-art Inception v3 model with very high success rates.
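The attack the abstract describes amounts to optimizing the pixels of a small patch pasted onto the image so that the classifier outputs an attacker-chosen class. The sketch below is an illustrative reconstruction in PyTorch, not the authors' code: it uses torchvision's pretrained Inception v3, a plain cross-entropy loss toward the target class (the paper's actual objective differs), and a fixed patch location; the function name, patch size, step count, and learning rate are all illustrative. A 42x42 patch on a 299x299 input covers roughly 2% of the pixels.

import torch
import torch.nn.functional as F
from torchvision.models import inception_v3

device = "cuda" if torch.cuda.is_available() else "cpu"
model = inception_v3(pretrained=True).to(device).eval()

def localized_patch_attack(image, target_class, patch_size=42, xy=(0, 0),
                           steps=500, lr=0.05):
    """Optimize a visible patch at position `xy` so the model predicts target_class.

    image: (1, 3, 299, 299) tensor, already normalized to the model's input range.
    """
    x, y = xy
    # The patch is the only optimized variable; the image itself is untouched.
    patch = torch.rand(1, 3, patch_size, patch_size, device=device,
                       requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    target = torch.tensor([target_class], device=device)
    for _ in range(steps):
        adv = image.clone()
        # Paste the patch over the image (replace pixels, not additive noise).
        adv[:, :, y:y + patch_size, x:x + patch_size] = patch
        loss = F.cross_entropy(model(adv), target)  # drive prediction to target
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            # Keep the patch within the valid pixel range of the input.
            patch.clamp_(image.min(), image.max())
    return patch.detach()

Pasting the patch over the pixels, rather than adding an imperceptible perturbation bounded by an epsilon-ball, is what makes the noise visible yet localized; the resulting patch can then be placed at other locations or on other images to test the transferability the abstract claims.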

Cite this Paper


BibTeX
@InProceedings{pmlr-v80-karmon18a,
  title     = {{L}a{VAN}: Localized and Visible Adversarial Noise},
  author    = {Karmon, Danny and Zoran, Daniel and Goldberg, Yoav},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {2507--2515},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/karmon18a/karmon18a.pdf},
  url       = {https://proceedings.mlr.press/v80/karmon18a.html}
}
APA
Karmon, D., Zoran, D. & Goldberg, Y. (2018). LaVAN: Localized and Visible Adversarial Noise. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:2507-2515. Available from https://proceedings.mlr.press/v80/karmon18a.html.
