The Odds are Odd: A Statistical Test for Detecting Adversarial Examples

Kevin Roth, Yannic Kilcher, Thomas Hofmann
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:5498-5507, 2019.

Abstract

We investigate conditions under which test statistics exist that can reliably detect examples that have been adversarially manipulated in a white-box attack. These statistics can be easily computed and calibrated by randomly corrupting inputs. They exploit certain anomalies that adversarial attacks introduce, in particular if they follow the paradigm of choosing perturbations optimally under p-norm constraints. Access to the log-odds is the only requirement to defend models. We justify our approach empirically, but also provide conditions under which detectability via the suggested test statistics is guaranteed to be effective. In our experiments, we show that it is even possible to correct test-time predictions for adversarial attacks with high accuracy.
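
The abstract compresses the method, but the core idea can be illustrated with a short sketch: average the log-odds (logit differences relative to the predicted class) under random input corruption, standardize them against statistics calibrated on clean data, and flag inputs whose scores are anomalous. This is a minimal sketch in the spirit of the abstract, not the paper's exact procedure; the classifier interface `logits_fn`, the noise scale `sigma`, the sample count, and the threshold `tau` are illustrative assumptions.

```python
import numpy as np

def noise_perturbed_log_odds(logits_fn, x, n_samples=100, sigma=0.05):
    """Average log-odds shift of x under random corruption (sketch).

    logits_fn: maps an input array to a 1-D vector of class logits.
    Returns the predicted class y and, for each class z, the mean of
    f_z(x + eta) - f_y(x + eta) over Gaussian noise draws eta.
    Defaults are assumptions, not the paper's calibrated values.
    """
    clean = logits_fn(x)
    y = int(np.argmax(clean))
    shifts = []
    for _ in range(n_samples):
        eta = sigma * np.random.randn(*x.shape)
        noisy = logits_fn(x + eta)
        shifts.append(noisy - noisy[y])  # log-odds relative to predicted class
    g = np.mean(shifts, axis=0)          # g[y] == 0 by construction
    return y, g

def is_adversarial(g, y, mu, std, tau=2.0):
    """Flag the input if any standardized log-odds score exceeds tau.

    mu, std: per-(predicted class y, class z) mean/std of g, calibrated
    on clean held-out data. tau is an illustrative threshold.
    """
    safe_std = np.where(std[y] > 0, std[y], 1.0)  # guard the z == y entry
    z_scores = (g - mu[y]) / safe_std
    z_scores[y] = -np.inf                # ignore the predicted class itself
    return bool(np.max(z_scores) > tau)
```

The intuition suggested by the abstract is that optimal p-norm-constrained perturbations are atypically fragile under random corruption: noise shifts the log-odds back toward the source class, producing standardized scores that stand out against clean-data statistics, which is also what makes correcting the prediction (rather than merely rejecting the input) possible.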

Cite this Paper

BibTeX
@InProceedings{pmlr-v97-roth19a,
  title     = {The Odds are Odd: A Statistical Test for Detecting Adversarial Examples},
  author    = {Roth, Kevin and Kilcher, Yannic and Hofmann, Thomas},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {5498--5507},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/roth19a/roth19a.pdf},
  url       = {https://proceedings.mlr.press/v97/roth19a.html}
}
Endnote
%0 Conference Paper
%T The Odds are Odd: A Statistical Test for Detecting Adversarial Examples
%A Kevin Roth
%A Yannic Kilcher
%A Thomas Hofmann
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-roth19a
%I PMLR
%P 5498--5507
%U https://proceedings.mlr.press/v97/roth19a.html
%V 97
APA
Roth, K., Kilcher, Y. & Hofmann, T. (2019). The Odds are Odd: A Statistical Test for Detecting Adversarial Examples. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:5498-5507. Available from https://proceedings.mlr.press/v97/roth19a.html.