Monge blunts Bayes: Hardness Results for Adversarial Training

Zac Cranko, Aditya Menon, Richard Nock, Cheng Soon Ong, Zhan Shi, Christian Walder
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:1406-1415, 2019.

Abstract

The last few years have seen a staggering number of empirical studies of the robustness of neural networks under a model of adversarial perturbations of their inputs. Most rely on an adversary which carries out local modifications within prescribed balls. None, however, has so far questioned the broader picture: how to frame a resource-bounded adversary so that it can be severely detrimental to learning, a non-trivial problem which entails at a minimum the choice of loss and classifiers. We suggest a formal answer for losses that satisfy the minimal statistical requirement of being proper. We pin down a simple sufficient property for any given class of adversaries to be detrimental to learning, involving a central measure of “harmfulness” which generalizes the well-known class of integral probability metrics. A key feature of our result is that it holds for all proper losses, and for a popular subset of these, the optimisation of this central measure appears to be independent of the loss. When classifiers are Lipschitz (a now popular approach in adversarial training), this optimisation resorts to optimal transport to make a low-budget compression of class marginals. Toy experiments confirm a finding recently observed independently: training against a sufficiently budgeted adversary of this kind improves generalization.
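For context only: the “harmfulness” measure mentioned above generalizes integral probability metrics (IPMs). The following is the standard textbook definition, not the paper's own notation. An IPM indexed by a function class $\mathcal{F}$ compares two distributions $P$ and $Q$ via

    \gamma_{\mathcal{F}}(P, Q) \;=\; \sup_{f \in \mathcal{F}} \left| \mathbb{E}_{X \sim P}[f(X)] \;-\; \mathbb{E}_{Y \sim Q}[f(Y)] \right|.

When $\mathcal{F}$ is the set of $1$-Lipschitz functions, Kantorovich-Rubinstein duality identifies $\gamma_{\mathcal{F}}(P, Q)$ with the Wasserstein-1 (optimal transport) distance between $P$ and $Q$, which is the standard link between Lipschitz classifiers and the Monge/Kantorovich optimal-transport formulation alluded to in the title.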

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-cranko19a,
  title     = {Monge blunts Bayes: Hardness Results for Adversarial Training},
  author    = {Cranko, Zac and Menon, Aditya and Nock, Richard and Ong, Cheng Soon and Shi, Zhan and Walder, Christian},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {1406--1415},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/cranko19a/cranko19a.pdf},
  url       = {https://proceedings.mlr.press/v97/cranko19a.html}
}
Endnote
%0 Conference Paper
%T Monge blunts Bayes: Hardness Results for Adversarial Training
%A Zac Cranko
%A Aditya Menon
%A Richard Nock
%A Cheng Soon Ong
%A Zhan Shi
%A Christian Walder
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-cranko19a
%I PMLR
%P 1406--1415
%U https://proceedings.mlr.press/v97/cranko19a.html
%V 97
APA
Cranko, Z., Menon, A., Nock, R., Ong, C. S., Shi, Z. & Walder, C. (2019). Monge blunts Bayes: Hardness Results for Adversarial Training. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:1406-1415. Available from https://proceedings.mlr.press/v97/cranko19a.html.