On the Connection Between Adversarial Robustness and Saliency Map Interpretability

Christian Etmann, Sebastian Lunz, Peter Maass, Carola Schoenlieb
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:1823-1832, 2019.

Abstract

Recent studies on the adversarial vulnerability of neural networks have shown that models trained to be more robust to adversarial attacks exhibit more interpretable saliency maps than their non-robust counterparts. We aim to quantify this behaviour by considering the alignment between input image and saliency map. We hypothesize that as the distance to the decision boundary grows, so does the alignment. This connection is strictly true in the case of linear models. We confirm these theoretical findings with experiments based on models trained with a local Lipschitz regularization and identify where the nonlinear nature of neural networks weakens the relation.
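
To make the linear-model claim in the abstract concrete, consider a binary linear classifier (a sketch in my own notation, not necessarily the paper's): for

    f(x) = \langle w, x \rangle + b,

the saliency map is the input gradient \nabla_x f(x) = w, and the distance of x to the decision boundary \{f = 0\} is |f(x)| / \|w\|_2. Measuring alignment as the absolute inner product between the input and the normalized saliency map gives

    \alpha(x) = \frac{|\langle x, \nabla_x f(x)\rangle|}{\|\nabla_x f(x)\|_2} = \frac{|\langle x, w\rangle|}{\|w\|_2} = \frac{|f(x) - b|}{\|w\|_2},

which, for an unbiased model (b = 0), equals the distance to the decision boundary, so a larger margin forces a larger alignment.

The same quantity can be estimated for a trained network by taking the gradient of the predicted-class logit with respect to the input. The following is a minimal PyTorch sketch, assuming the model is any image classifier returning logits and x is a single input image with a batch dimension; the function name and interface are illustrative, not taken from the paper's code.

    import torch

    def saliency_alignment(model, x):
        """Alignment |<x, g>| / ||g|| between an input image x and its
        saliency map g, taken here as the gradient of the top logit
        with respect to the input (illustrative choice of saliency)."""
        x = x.clone().detach().requires_grad_(True)
        logits = model(x)                       # shape (1, num_classes)
        top_logit = logits[0, logits[0].argmax()]
        g, = torch.autograd.grad(top_logit, x)  # same shape as x
        g = g.flatten()
        v = x.detach().flatten()
        return ((v @ g).abs() / g.norm()).item()

By the reasoning above, models trained to be more robust (for instance with the local Lipschitz regularization used in the paper's experiments) should tend to yield larger values of this alignment than their non-robust counterparts.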

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-etmann19a,
  title     = {On the Connection Between Adversarial Robustness and Saliency Map Interpretability},
  author    = {Etmann, Christian and Lunz, Sebastian and Maass, Peter and Schoenlieb, Carola},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {1823--1832},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/etmann19a/etmann19a.pdf},
  url       = {https://proceedings.mlr.press/v97/etmann19a.html},
  abstract  = {Recent studies on the adversarial vulnerability of neural networks have shown that models trained to be more robust to adversarial attacks exhibit more interpretable saliency maps than their non-robust counterparts. We aim to quantify this behaviour by considering the alignment between input image and saliency map. We hypothesize that as the distance to the decision boundary grows, so does the alignment. This connection is strictly true in the case of linear models. We confirm these theoretical findings with experiments based on models trained with a local Lipschitz regularization and identify where the nonlinear nature of neural networks weakens the relation.}
}
Endnote
%0 Conference Paper
%T On the Connection Between Adversarial Robustness and Saliency Map Interpretability
%A Christian Etmann
%A Sebastian Lunz
%A Peter Maass
%A Carola Schoenlieb
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-etmann19a
%I PMLR
%P 1823--1832
%U https://proceedings.mlr.press/v97/etmann19a.html
%V 97
%X Recent studies on the adversarial vulnerability of neural networks have shown that models trained to be more robust to adversarial attacks exhibit more interpretable saliency maps than their non-robust counterparts. We aim to quantify this behaviour by considering the alignment between input image and saliency map. We hypothesize that as the distance to the decision boundary grows, so does the alignment. This connection is strictly true in the case of linear models. We confirm these theoretical findings with experiments based on models trained with a local Lipschitz regularization and identify where the nonlinear nature of neural networks weakens the relation.
APA
Etmann, C., Lunz, S., Maass, P. & Schoenlieb, C. (2019). On the Connection Between Adversarial Robustness and Saliency Map Interpretability. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:1823-1832. Available from https://proceedings.mlr.press/v97/etmann19a.html.
