A principled approach for generating adversarial images under non-smooth dissimilarity metrics

Aram-Alexandre Pooladian, Chris Finlay, Tim Hoheisel, Adam Oberman
Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, PMLR 108:1442-1452, 2020.

Abstract

Deep neural networks perform well on real-world data but are prone to adversarial perturbations: small changes in the input easily lead to misclassification. In this work, we propose an attack methodology not only for cases where the perturbations are measured by Lp norms, but in fact for any adversarial dissimilarity metric with a closed proximal form. This includes, but is not limited to, L1, L2, and L-infinity perturbations; the L0 counting "norm" (i.e. true sparseness); and the total variation seminorm, which is a (non-Lp) convolutional dissimilarity measuring local pixel changes. Our approach is a natural extension of a recent adversarial attack method, and eliminates the differentiability requirement of the metric. We demonstrate our algorithm, ProxLogBarrier, on the MNIST, CIFAR10, and ImageNet-1k datasets. We consider undefended and defended models, and show that our algorithm easily transfers to various datasets. We observe that ProxLogBarrier outperforms a host of modern adversarial attacks specialized for the L0 case. Moreover, by altering images in the total variation seminorm, we shed light on a new class of perturbations that exploit neighboring pixel information.
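To make the abstract's "closed proximal form" requirement concrete, below is a minimal NumPy sketch of the kind of proximal-gradient loop a ProxLogBarrier-style attack builds on. This is our own illustration, not the authors' released implementation: the helper barrier_grad (assumed to return the gradient of a smooth log-barrier misclassification penalty), the function names, step sizes, and iteration count are all hypothetical placeholders.

import numpy as np

def prox_l1(v, lam):
    # Soft-thresholding: the proximal operator of lam * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def prox_l0(v, lam):
    # Hard-thresholding: the proximal operator of lam * ||.||_0
    # (keep an entry only when |v_i| > sqrt(2 * lam)).
    return np.where(np.abs(v) > np.sqrt(2.0 * lam), v, 0.0)

def prox_l2(v, lam):
    # Block soft-thresholding: the proximal operator of lam * ||.||_2.
    n = np.linalg.norm(v)
    return np.zeros_like(v) if n <= lam else (1.0 - lam / n) * v

def prox_attack(x0, barrier_grad, prox, step=0.1, lam=0.05, iters=100):
    # Generic proximal-gradient iteration: the smooth misclassification
    # penalty enters through its gradient, while the (possibly
    # non-differentiable) dissimilarity metric enters through its prox.
    delta = np.zeros_like(x0)  # perturbation; the metric is evaluated on delta
    for _ in range(iters):
        g = barrier_grad(x0 + delta)                # gradient step on the barrier
        delta = prox(delta - step * g, step * lam)  # prox step on the metric
    return np.clip(x0 + delta, 0.0, 1.0)            # keep pixels in a valid range

Passing prox_l0 instead of prox_l1 switches the attack from sparse-magnitude to true-sparsity perturbations without changing anything else in the loop; a total variation prox can be plugged in the same way, though evaluating it requires a small inner solve rather than a one-line formula.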

Cite this Paper


BibTeX
@InProceedings{pmlr-v108-pooladian20a,
  title     = {A principled approach for generating adversarial images under non-smooth dissimilarity metrics},
  author    = {Pooladian, Aram-Alexandre and Finlay, Chris and Hoheisel, Tim and Oberman, Adam},
  booktitle = {Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics},
  pages     = {1442--1452},
  year      = {2020},
  editor    = {Chiappa, Silvia and Calandra, Roberto},
  volume    = {108},
  series    = {Proceedings of Machine Learning Research},
  month     = {26--28 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v108/pooladian20a/pooladian20a.pdf},
  url       = {https://proceedings.mlr.press/v108/pooladian20a.html}
}
Endnote
%0 Conference Paper
%T A principled approach for generating adversarial images under non-smooth dissimilarity metrics
%A Aram-Alexandre Pooladian
%A Chris Finlay
%A Tim Hoheisel
%A Adam Oberman
%B Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2020
%E Silvia Chiappa
%E Roberto Calandra
%F pmlr-v108-pooladian20a
%I PMLR
%P 1442--1452
%U https://proceedings.mlr.press/v108/pooladian20a.html
%V 108
APA
Pooladian, A., Finlay, C., Hoheisel, T. & Oberman, A. (2020). A principled approach for generating adversarial images under non-smooth dissimilarity metrics. Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 108:1442-1452. Available from https://proceedings.mlr.press/v108/pooladian20a.html.
