Proper Network Interpretability Helps Adversarial Robustness in Classification

Akhilan Boopathy, Sijia Liu, Gaoyuan Zhang, Cynthia Liu, Pin-Yu Chen, Shiyu Chang, Luca Daniel
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:1014-1023, 2020.

Abstract

Recent works have empirically shown that there exist adversarial examples that can be hidden from neural network interpretability (namely, making network interpretation maps visually similar), or interpretability is itself susceptible to adversarial attacks. In this paper, we theoretically show that with a proper measurement of interpretation, it is actually difficult to prevent prediction-evasion adversarial attacks from causing interpretation discrepancy, as confirmed by experiments on MNIST, CIFAR-10 and Restricted ImageNet. Spurred by that, we develop an interpretability-aware defensive scheme built only on promoting robust interpretation (without the need for resorting to adversarial loss minimization). We show that our defense achieves both robust classification and robust interpretation, outperforming state-of-the-art adversarial training methods against attacks of large perturbation in particular.
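The defense described in the abstract promotes robust interpretation during training instead of minimizing an adversarial classification loss. The sketch below is a minimal, hypothetical illustration of that idea, assuming input-gradient saliency as the interpretation map and an l1 penalty on the discrepancy between the saliency of a clean input and a randomly perturbed copy; the names (saliency_map, interp_regularized_loss, lambda_interp) and the random-perturbation choice are assumptions for illustration, not the authors' implementation.

# Illustrative sketch only: regularize training with an interpretation-
# discrepancy penalty (assumed l1 distance between input-gradient saliency
# maps of clean and perturbed inputs), on top of the usual classification loss.
import torch
import torch.nn.functional as F

def saliency_map(model, x, y):
    # Input-gradient saliency of the true-class logit w.r.t. the input.
    x = x.clone().requires_grad_(True)
    logits = model(x)
    score = logits.gather(1, y.unsqueeze(1)).sum()
    grad, = torch.autograd.grad(score, x, create_graph=True)
    return grad

def interp_regularized_loss(model, x, y, eps=0.03, lambda_interp=1.0):
    # Random perturbation within an l_inf ball of radius eps (a simple
    # stand-in for whichever perturbation scheme the defense actually uses).
    delta = torch.empty_like(x).uniform_(-eps, eps)
    x_pert = (x + delta).clamp(0.0, 1.0)

    cls_loss = F.cross_entropy(model(x), y)

    # l1 discrepancy between clean and perturbed interpretation maps.
    interp_gap = (saliency_map(model, x, y) - saliency_map(model, x_pert, y)).abs().mean()

    return cls_loss + lambda_interp * interp_gap

In a training loop, interp_regularized_loss would simply replace the plain cross-entropy loss; lambda_interp trades off classification accuracy against interpretation stability.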

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-boopathy20a,
  title     = {Proper Network Interpretability Helps Adversarial Robustness in Classification},
  author    = {Boopathy, Akhilan and Liu, Sijia and Zhang, Gaoyuan and Liu, Cynthia and Chen, Pin-Yu and Chang, Shiyu and Daniel, Luca},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {1014--1023},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/boopathy20a/boopathy20a.pdf},
  url       = {https://proceedings.mlr.press/v119/boopathy20a.html},
  abstract  = {Recent works have empirically shown that there exist adversarial examples that can be hidden from neural network interpretability (namely, making network interpretation maps visually similar), or interpretability is itself susceptible to adversarial attacks. In this paper, we theoretically show that with a proper measurement of interpretation, it is actually difficult to prevent prediction-evasion adversarial attacks from causing interpretation discrepancy, as confirmed by experiments on MNIST, CIFAR-10 and Restricted ImageNet. Spurred by that, we develop an interpretability-aware defensive scheme built only on promoting robust interpretation (without the need for resorting to adversarial loss minimization). We show that our defense achieves both robust classification and robust interpretation, outperforming state-of-the-art adversarial training methods against attacks of large perturbation in particular.}
}
Endnote
%0 Conference Paper
%T Proper Network Interpretability Helps Adversarial Robustness in Classification
%A Akhilan Boopathy
%A Sijia Liu
%A Gaoyuan Zhang
%A Cynthia Liu
%A Pin-Yu Chen
%A Shiyu Chang
%A Luca Daniel
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-boopathy20a
%I PMLR
%P 1014--1023
%U https://proceedings.mlr.press/v119/boopathy20a.html
%V 119
%X Recent works have empirically shown that there exist adversarial examples that can be hidden from neural network interpretability (namely, making network interpretation maps visually similar), or interpretability is itself susceptible to adversarial attacks. In this paper, we theoretically show that with a proper measurement of interpretation, it is actually difficult to prevent prediction-evasion adversarial attacks from causing interpretation discrepancy, as confirmed by experiments on MNIST, CIFAR-10 and Restricted ImageNet. Spurred by that, we develop an interpretability-aware defensive scheme built only on promoting robust interpretation (without the need for resorting to adversarial loss minimization). We show that our defense achieves both robust classification and robust interpretation, outperforming state-of-the-art adversarial training methods against attacks of large perturbation in particular.
APA
Boopathy, A., Liu, S., Zhang, G., Liu, C., Chen, P., Chang, S. & Daniel, L. (2020). Proper Network Interpretability Helps Adversarial Robustness in Classification. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:1014-1023. Available from https://proceedings.mlr.press/v119/boopathy20a.html.