Adversarially trained neural representations are already as robust as biological neural representations

Chong Guo, Michael Lee, Guillaume Leclerc, Joel Dapello, Yug Rao, Aleksander Madry, James DiCarlo
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:8072-8081, 2022.

Abstract

Visual systems of primates are the gold standard of robust perception. There is thus a general belief that mimicking the neural representations that underlie those systems will yield artificial visual systems that are adversarially robust. In this work, we develop a method for performing adversarial visual attacks directly on primate brain activity. We then leverage this method to demonstrate that the above-mentioned belief might not be well-founded. Specifically, we report that the biological neurons that make up visual systems of primates exhibit susceptibility to adversarial perturbations that is comparable in magnitude to that of existing (robustly trained) artificial neural networks.
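The attack described in the abstract targets neural representations rather than class labels. As a rough illustration only (not the authors' released code, and not the brain-attack pipeline itself), the sketch below runs a generic L2-bounded projected-gradient attack that perturbs an image so as to maximally shift a network's internal representation; the ResNet-50 backbone, the budget eps, and the step sizes are all illustrative assumptions.

```python
import torch
import torchvision.models as models

def attack_representation(model, x, eps=1.0, steps=10, step_size=0.25):
    """Projected gradient ascent: find delta with ||delta||_2 <= eps that
    maximally changes model(x), i.e. the representation, not a class label."""
    model.eval()
    with torch.no_grad():
        clean = model(x)                         # reference representation
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = (model(x + delta) - clean).pow(2).sum()
        loss.backward()
        with torch.no_grad():
            g = delta.grad
            delta += step_size * g / (g.norm() + 1e-12)  # normalized ascent step
            n = delta.norm()
            if n > eps:
                delta *= eps / n                 # project back onto the L2 ball
        delta.grad.zero_()
    return (x + delta).detach()

# Usage: perturb an image to shift a ResNet-50's penultimate-layer features
# (hypothetical setup; the paper's experiments use primate neural recordings).
backbone = models.resnet50(weights=None)
backbone.fc = torch.nn.Identity()                # expose the 2048-d feature vector
x = torch.rand(1, 3, 224, 224)
x_adv = attack_representation(backbone, x)
print((x_adv - x).norm())                        # perturbation magnitude, <= eps
```

The norm bound eps plays the role of the "magnitude" of the perturbation that the abstract compares between biological and artificial representations.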

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-guo22d,
  title     = {Adversarially trained neural representations are already as robust as biological neural representations},
  author    = {Guo, Chong and Lee, Michael and Leclerc, Guillaume and Dapello, Joel and Rao, Yug and Madry, Aleksander and DiCarlo, James},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {8072--8081},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/guo22d/guo22d.pdf},
  url       = {https://proceedings.mlr.press/v162/guo22d.html}
}
Endnote
%0 Conference Paper
%T Adversarially trained neural representations are already as robust as biological neural representations
%A Chong Guo
%A Michael Lee
%A Guillaume Leclerc
%A Joel Dapello
%A Yug Rao
%A Aleksander Madry
%A James DiCarlo
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-guo22d
%I PMLR
%P 8072--8081
%U https://proceedings.mlr.press/v162/guo22d.html
%V 162
APA
Guo, C., Lee, M., Leclerc, G., Dapello, J., Rao, Y., Madry, A. & DiCarlo, J. (2022). Adversarially trained neural representations are already as robust as biological neural representations. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:8072-8081. Available from https://proceedings.mlr.press/v162/guo22d.html.