Reinforcing Adversarial Robustness using Model Confidence Induced by Adversarial Training

Xi Wu, Uyeong Jang, Jiefeng Chen, Lingjiao Chen, Somesh Jha
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:5334-5342, 2018.

Abstract

In this paper we study how to leverage confidence information induced by adversarial training to reinforce the adversarial robustness of a given adversarially trained model. A natural measure of confidence is $\|F(x)\|_\infty$ (i.e., how confident $F$ is about its prediction). We start by analyzing the adversarial training formulation proposed by Madry et al. We demonstrate that, under a variety of instantiations, even an only somewhat good solution to their objective induces confidence to act as a discriminator that can distinguish between correct and incorrect model predictions in a neighborhood of a point sampled from the underlying distribution. Based on this, we propose Highly Confident Near Neighbor (HCNN), a framework that combines confidence information and nearest neighbor search to reinforce the adversarial robustness of a base model. We give algorithms in this framework and perform a detailed empirical study. We report encouraging experimental results that support our analysis, and also discuss problems we observed with existing adversarial training.
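To make the abstract's two ingredients concrete, the following minimal Python sketch illustrates the confidence measure $\|F(x)\|_\infty$ (the largest softmax probability) and an HCNN-style prediction that delegates to the most confident point near the input. This is an illustrative assumption, not the paper's exact algorithm: the names confidence and hcnn_predict, the random-sampling neighborhood search, and the parameters eps and n_samples are all hypothetical.

import numpy as np

def confidence(F, x):
    """Confidence measure ||F(x)||_inf, assuming F maps an input
    to a probability vector over classes (e.g., softmax output)."""
    return np.max(F(x))

def hcnn_predict(F, x, eps=0.1, n_samples=100, rng=None):
    """Sketch of a Highly Confident Near Neighbor (HCNN) style prediction:
    search an L_inf ball of radius eps around x for the point on which the
    (adversarially trained) base model F is most confident, then return
    F's prediction at that point. The sampling search here is illustrative."""
    rng = np.random.default_rng() if rng is None else rng
    best_x, best_conf = x, confidence(F, x)
    for _ in range(n_samples):
        # Random perturbation within the L_inf ball of radius eps around x.
        x_near = x + rng.uniform(-eps, eps, size=x.shape)
        c = confidence(F, x_near)
        if c > best_conf:
            best_x, best_conf = x_near, c
    return int(np.argmax(F(best_x)))

The paper's actual algorithms may search the neighborhood differently; the point of the sketch is only the core idea that, per the analysis above, confidence discriminates right from wrong predictions near a natural point, so the prediction is taken at a highly confident near neighbor rather than at the (possibly adversarial) input itself.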

Cite this Paper


BibTeX
@InProceedings{pmlr-v80-wu18e,
  title     = {Reinforcing Adversarial Robustness using Model Confidence Induced by Adversarial Training},
  author    = {Wu, Xi and Jang, Uyeong and Chen, Jiefeng and Chen, Lingjiao and Jha, Somesh},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {5334--5342},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/wu18e/wu18e.pdf},
  url       = {https://proceedings.mlr.press/v80/wu18e.html}
}
APA
Wu, X., Jang, U., Chen, J., Chen, L. & Jha, S. (2018). Reinforcing Adversarial Robustness using Model Confidence Induced by Adversarial Training. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:5334-5342. Available from https://proceedings.mlr.press/v80/wu18e.html.