Hindering Adversarial Attacks with Implicit Neural Representations

Andrei A Rusu, Dan Andrei Calian, Sven Gowal, Raia Hadsell
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:18910-18934, 2022.

Abstract

We introduce the Lossy Implicit Network Activation Coding (LINAC) defence, an input transformation which successfully hinders several common adversarial attacks on CIFAR-10 classifiers for perturbations up to 8/255 in Linf norm and 0.5 in L2 norm. Implicit neural representations are used to approximately encode pixel colour intensities in 2D images such that classifiers trained on transformed data appear to have robustness to small perturbations without adversarial training or large drops in performance. The seed of the random number generator used to initialise and train the implicit neural representation turns out to be necessary information for stronger generic attacks, suggesting its role as a private key. We devise a Parametric Bypass Approximation (PBA) attack strategy for key-based defences, which successfully invalidates an existing method in this category. Interestingly, our LINAC defence also hinders some transfer and adaptive attacks, including our novel PBA strategy. Our results emphasise the importance of a broad range of customised attacks despite apparent robustness according to standard evaluations.
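The abstract's core idea, fitting a small coordinate network (an implicit neural representation) to each image from a seed-determined initialisation and using its activations as a lossy code, can be illustrated with a minimal sketch. This is not the paper's actual LINAC architecture: the network size, activation choice, training procedure and which activations are kept are all assumptions here, and `linac_sketch` is a hypothetical name.

```python
import numpy as np

def linac_sketch(image, seed, hidden=8, steps=50, lr=1e-2):
    """Illustrative only: fit a tiny coordinate MLP to one grayscale image.
    The RNG seed fixes the initialisation, playing the role of the private
    key described in the abstract; the hidden activations (not the raw
    pixels) would then be passed to the classifier."""
    rng = np.random.default_rng(seed)              # seed acts as the private key
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]                    # (x, y) coordinate grid
    coords = np.stack([xs.ravel(), ys.ravel()], axis=1) / max(h - 1, w - 1, 1) * 2 - 1
    targets = image.reshape(-1, 1)                 # pixel intensities to encode
    W1 = rng.normal(0.0, 1.0, (2, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.1, (hidden, 1)); b2 = np.zeros(1)
    for _ in range(steps):                         # plain gradient descent on MSE
        a = np.tanh(coords @ W1 + b1)              # hidden activations: the "coding"
        err = (a @ W2 + b2) - targets
        gW2 = a.T @ err / len(err); gb2 = err.mean(0)
        da = err @ W2.T * (1 - a ** 2)             # backprop through tanh
        gW1 = coords.T @ da / len(da); gb1 = da.mean(0)
        W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1
    a = np.tanh(coords @ W1 + b1)
    return a.reshape(h, w, hidden)                 # lossy activation code of the image
```

Because the code depends on the seed, an attacker without the key cannot reproduce the exact transformation the classifier was trained on, which is the property the abstract's stronger attacks exploit once the seed is revealed.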

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-rusu22a,
  title     = {Hindering Adversarial Attacks with Implicit Neural Representations},
  author    = {Rusu, Andrei A and Calian, Dan Andrei and Gowal, Sven and Hadsell, Raia},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {18910--18934},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/rusu22a/rusu22a.pdf},
  url       = {https://proceedings.mlr.press/v162/rusu22a.html}
}
Endnote
%0 Conference Paper
%T Hindering Adversarial Attacks with Implicit Neural Representations
%A Andrei A Rusu
%A Dan Andrei Calian
%A Sven Gowal
%A Raia Hadsell
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-rusu22a
%I PMLR
%P 18910--18934
%U https://proceedings.mlr.press/v162/rusu22a.html
%V 162
APA
Rusu, A.A., Calian, D.A., Gowal, S. &amp; Hadsell, R. (2022). Hindering Adversarial Attacks with Implicit Neural Representations. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:18910-18934. Available from https://proceedings.mlr.press/v162/rusu22a.html.
