Generalized No Free Lunch Theorem for Adversarial Robustness

Elvis Dohmatob
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:1646-1654, 2019.

Abstract

This manuscript presents some new impossibility results on adversarial robustness in machine learning, a very important yet largely open problem. We show that if, conditioned on a class label, the data distribution satisfies the $W_2$ Talagrand transportation-cost inequality (for example, this condition is satisfied if the conditional distribution has a log-concave density, or is the uniform measure on a compact Riemannian manifold with positive Ricci curvature), then any classifier can be adversarially fooled with high probability once the perturbations are slightly greater than the natural noise level in the problem. We call this result the Strong "No Free Lunch" Theorem, as some recent results on the subject (Tsipras et al. 2018, Fawzi et al. 2018, etc.) can be recovered immediately as very particular cases. Our theoretical bounds are demonstrated on both simulated and real data (MNIST). We conclude the manuscript with some speculation on possible future research directions.
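
For context, the condition referenced in the abstract can be stated as follows (this is the standard form of the Talagrand inequality; the constant convention below is one common choice and is not quoted verbatim from the paper). A probability measure $\mu$ satisfies the $W_2$ Talagrand transportation-cost inequality $T_2(c)$, for a constant $c > 0$, if
$$
W_2(\nu, \mu) \;\le\; \sqrt{2c\,\mathrm{KL}(\nu \,\|\, \mu)} \quad \text{for every probability measure } \nu \ll \mu,
$$
where $W_2$ denotes the quadratic Wasserstein distance and $\mathrm{KL}$ the Kullback-Leibler divergence. In the paper's setting, $\mu$ plays the role of the class-conditional data distribution.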

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-dohmatob19a,
  title     = {Generalized No Free Lunch Theorem for Adversarial Robustness},
  author    = {Dohmatob, Elvis},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {1646--1654},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/dohmatob19a/dohmatob19a.pdf},
  url       = {https://proceedings.mlr.press/v97/dohmatob19a.html},
  abstract  = {This manuscript presents some new impossibility results on adversarial robustness in machine learning, a very important yet largely open problem. We show that if, conditioned on a class label, the data distribution satisfies the $W_2$ Talagrand transportation-cost inequality (for example, this condition is satisfied if the conditional distribution has a log-concave density, or is the uniform measure on a compact Riemannian manifold with positive Ricci curvature), then any classifier can be adversarially fooled with high probability once the perturbations are slightly greater than the natural noise level in the problem. We call this result the Strong "No Free Lunch" Theorem, as some recent results on the subject (Tsipras et al. 2018, Fawzi et al. 2018, etc.) can be recovered immediately as very particular cases. Our theoretical bounds are demonstrated on both simulated and real data (MNIST). We conclude the manuscript with some speculation on possible future research directions.}
}
Endnote
%0 Conference Paper
%T Generalized No Free Lunch Theorem for Adversarial Robustness
%A Elvis Dohmatob
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-dohmatob19a
%I PMLR
%P 1646--1654
%U https://proceedings.mlr.press/v97/dohmatob19a.html
%V 97
%X This manuscript presents some new impossibility results on adversarial robustness in machine learning, a very important yet largely open problem. We show that if, conditioned on a class label, the data distribution satisfies the $W_2$ Talagrand transportation-cost inequality (for example, this condition is satisfied if the conditional distribution has a log-concave density, or is the uniform measure on a compact Riemannian manifold with positive Ricci curvature), then any classifier can be adversarially fooled with high probability once the perturbations are slightly greater than the natural noise level in the problem. We call this result the Strong "No Free Lunch" Theorem, as some recent results on the subject (Tsipras et al. 2018, Fawzi et al. 2018, etc.) can be recovered immediately as very particular cases. Our theoretical bounds are demonstrated on both simulated and real data (MNIST). We conclude the manuscript with some speculation on possible future research directions.
APA
Dohmatob, E. (2019). Generalized No Free Lunch Theorem for Adversarial Robustness. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:1646-1654. Available from https://proceedings.mlr.press/v97/dohmatob19a.html.