Low Frequency Adversarial Perturbation

Chuan Guo, Jared S. Frank, Kilian Q. Weinberger
Proceedings of The 35th Uncertainty in Artificial Intelligence Conference, PMLR 115:1127-1137, 2020.

Abstract

Adversarial images aim to change a target model’s decision by minimally perturbing a target image. In the black-box setting, the absence of gradient information often renders this search problem costly in terms of query complexity. In this paper we propose to restrict the search for adversarial images to a low frequency domain. This approach is readily compatible with many existing black-box attack frameworks and consistently reduces their query cost by 2 to 4 times. Further, we can circumvent image transformation defenses even when both the model and the defense strategy are unknown. Finally, we demonstrate the efficacy of this technique by fooling the Google Cloud Vision platform with an unprecedented low number of model queries.
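The core idea of restricting perturbations to the low-frequency domain can be illustrated with a short sketch: sample random DCT coefficients only in the top-left (low-frequency) block and map them back to pixel space with a 2-D inverse DCT. This is an illustrative sketch of the frequency restriction, not the authors' implementation; the function names, the `ratio` parameter, and the L∞ scaling are assumptions for the example.

```python
import numpy as np

def dct_basis(n):
    # Orthonormal DCT-II basis matrix; row k is the k-th frequency component.
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    basis = np.sqrt(2.0 / n) * np.cos(np.pi * (i + 0.5) * k / n)
    basis[0, :] = np.sqrt(1.0 / n)
    return basis

def low_freq_perturbation(size, ratio, eps, rng):
    # Sample random coefficients only in the low-frequency r x r corner of
    # the DCT spectrum, zero elsewhere, then apply the 2-D inverse DCT.
    r = max(1, int(size * ratio))
    coeffs = np.zeros((size, size))
    coeffs[:r, :r] = rng.standard_normal((r, r))
    basis = dct_basis(size)
    delta = basis.T @ coeffs @ basis          # 2-D inverse DCT
    return eps * delta / np.max(np.abs(delta))  # scale to an L_inf budget

rng = np.random.default_rng(0)
# A smooth 224x224 perturbation using only the lowest 25% of frequencies
# in each dimension, scaled to an assumed budget of 8/255.
delta = low_freq_perturbation(224, ratio=0.25, eps=8 / 255, rng=rng)
```

Because the coefficients outside the low-frequency block are zero, the resulting perturbation is smooth in pixel space; a black-box attack would then search over the low-frequency coefficients instead of all pixels, shrinking the search space.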

Cite this Paper


BibTeX
@InProceedings{pmlr-v115-guo20a,
  title     = {Low Frequency Adversarial Perturbation},
  author    = {Guo, Chuan and Frank, Jared S. and Weinberger, Kilian Q.},
  booktitle = {Proceedings of The 35th Uncertainty in Artificial Intelligence Conference},
  pages     = {1127--1137},
  year      = {2020},
  editor    = {Adams, Ryan P. and Gogate, Vibhav},
  volume    = {115},
  series    = {Proceedings of Machine Learning Research},
  month     = {22--25 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v115/guo20a/guo20a.pdf},
  url       = {https://proceedings.mlr.press/v115/guo20a.html},
  abstract  = {Adversarial images aim to change a target model’s decision by minimally perturbing a target image. In the black-box setting, the absence of gradient information often renders this search problem costly in terms of query complexity. In this paper we propose to restrict the search for adversarial images to a low frequency domain. This approach is readily compatible with many existing black-box attack frameworks and consistently reduces their query cost by 2 to 4 times. Further, we can circumvent image transformation defenses even when both the model and the defense strategy are unknown. Finally, we demonstrate the efficacy of this technique by fooling the Google Cloud Vision platform with an unprecedented low number of model queries.}
}
Endnote
%0 Conference Paper
%T Low Frequency Adversarial Perturbation
%A Chuan Guo
%A Jared S. Frank
%A Kilian Q. Weinberger
%B Proceedings of The 35th Uncertainty in Artificial Intelligence Conference
%C Proceedings of Machine Learning Research
%D 2020
%E Ryan P. Adams
%E Vibhav Gogate
%F pmlr-v115-guo20a
%I PMLR
%P 1127--1137
%U https://proceedings.mlr.press/v115/guo20a.html
%V 115
%X Adversarial images aim to change a target model’s decision by minimally perturbing a target image. In the black-box setting, the absence of gradient information often renders this search problem costly in terms of query complexity. In this paper we propose to restrict the search for adversarial images to a low frequency domain. This approach is readily compatible with many existing black-box attack frameworks and consistently reduces their query cost by 2 to 4 times. Further, we can circumvent image transformation defenses even when both the model and the defense strategy are unknown. Finally, we demonstrate the efficacy of this technique by fooling the Google Cloud Vision platform with an unprecedented low number of model queries.
APA
Guo, C., Frank, J.S. & Weinberger, K.Q. (2020). Low Frequency Adversarial Perturbation. Proceedings of The 35th Uncertainty in Artificial Intelligence Conference, in Proceedings of Machine Learning Research 115:1127-1137. Available from https://proceedings.mlr.press/v115/guo20a.html.
