Adversarial Risk and the Dangers of Evaluating Against Weak Attacks

Jonathan Uesato, Brendan O’Donoghue, Pushmeet Kohli, Aaron van den Oord
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:5025-5034, 2018.

Abstract

This paper investigates recently proposed approaches for defending against adversarial examples and evaluating adversarial robustness. We motivate adversarial risk as an objective for achieving models robust to worst-case inputs. We then frame commonly used attacks and evaluation metrics as defining a tractable surrogate objective to the true adversarial risk. This suggests that models may optimize this surrogate rather than the true adversarial risk. We formalize this notion as obscurity to an adversary, and develop tools and heuristics for identifying obscured models and designing transparent models. We demonstrate that this is a significant problem in practice by repurposing gradient-free optimization techniques into adversarial attacks, which we use to decrease the accuracy of several recently proposed defenses to near zero. Our hope is that our formulations and results will help researchers to develop more powerful defenses.
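
As an editorial illustration, the abstract's framing can be written out explicitly. The following is a minimal formalization, with notation assumed here rather than quoted from the paper (perturbation set N(x), loss \ell, data distribution D, and an attack A returning some point in N(x)):

\[
\mathrm{risk}_{\mathrm{adv}}(\theta)
  = \mathbb{E}_{(x,y)\sim D}\Big[\max_{x' \in N(x)} \ell(\theta; x', y)\Big],
\qquad
\mathrm{risk}_{A}(\theta)
  = \mathbb{E}_{(x,y)\sim D}\big[\ell(\theta; A(x, y), y)\big]
  \le \mathrm{risk}_{\mathrm{adv}}(\theta).
\]

Because any concrete attack A searches only part of N(x), the surrogate risk it induces is a lower bound on the true adversarial risk: a model can drive the surrogate down, appearing robust to that attack, while its true adversarial risk stays high. This is the "obscurity" failure mode the abstract describes.

The gradient-free attacks mentioned in the abstract repurpose black-box optimizers such as SPSA. Below is a minimal NumPy sketch of an SPSA-style L-infinity attack, under stated assumptions (loss_fn is a hypothetical scalar black-box adversarial loss to maximize; inputs are assumed to lie in [0, 1]); it is not a reproduction of the authors' implementation, which differs in details such as the update rule and batched model queries:

import numpy as np

def spsa_attack(loss_fn, x, epsilon=0.05, delta=0.01, lr=0.01,
                steps=100, samples=32, rng=None):
    """SPSA-style black-box L-infinity attack (illustrative sketch).

    loss_fn(x) -> scalar adversarial loss to *maximize*; only function
    evaluations are used, never gradients.
    """
    rng = np.random.default_rng() if rng is None else rng
    x_adv = x.copy()
    for _ in range(steps):
        grad_est = np.zeros_like(x)
        for _ in range(samples):
            # Rademacher perturbation direction; since each entry is +-1,
            # multiplying by v equals the elementwise division in SPSA.
            v = rng.choice([-1.0, 1.0], size=x.shape)
            # Two-sided finite-difference estimate of the gradient along v
            grad_est += (loss_fn(x_adv + delta * v)
                         - loss_fn(x_adv - delta * v)) / (2 * delta) * v
        grad_est /= samples
        # Ascent step on the loss, then project back into the epsilon-ball
        x_adv = x_adv + lr * np.sign(grad_est)
        x_adv = np.clip(x_adv, x - epsilon, x + epsilon)
        x_adv = np.clip(x_adv, 0.0, 1.0)  # keep inputs in the valid range
    return x_adv

Because loss_fn only needs to be evaluated, not differentiated, an attack of this form still makes progress against models whose gradients are masked or uninformative, which is exactly how defenses evaluated only against weak gradient-based attacks can be circumvented.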

Cite this Paper


BibTeX
@InProceedings{pmlr-v80-uesato18a,
  title     = {Adversarial Risk and the Dangers of Evaluating Against Weak Attacks},
  author    = {Uesato, Jonathan and O'Donoghue, Brendan and Kohli, Pushmeet and van den Oord, Aaron},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {5025--5034},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/uesato18a/uesato18a.pdf},
  url       = {https://proceedings.mlr.press/v80/uesato18a.html}
}
Endnote
%0 Conference Paper
%T Adversarial Risk and the Dangers of Evaluating Against Weak Attacks
%A Jonathan Uesato
%A Brendan O’Donoghue
%A Pushmeet Kohli
%A Aaron van den Oord
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-uesato18a
%I PMLR
%P 5025--5034
%U https://proceedings.mlr.press/v80/uesato18a.html
%V 80
APA
Uesato, J., O’Donoghue, B., Kohli, P. & van den Oord, A. (2018). Adversarial Risk and the Dangers of Evaluating Against Weak Attacks. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:5025-5034. Available from https://proceedings.mlr.press/v80/uesato18a.html.
