Enhancing Robustness of Neural Networks through Fourier Stabilization

Netanel Raviv, Aidan Kelley, Minzhe Guo, Yevgeniy Vorobeychik
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:8880-8889, 2021.

Abstract

Despite the considerable success of neural networks in security settings such as malware detection, such models have proved vulnerable to evasion attacks, in which attackers make slight changes to inputs (e.g., malware) to bypass detection. We propose a novel approach, Fourier stabilization, for designing evasion-robust neural networks with binary inputs. This approach, which is complementary to other forms of defense, replaces the weights of individual neurons with robust analogs derived using Fourier analytic tools. The choice of which neurons to stabilize in a neural network is then a combinatorial optimization problem, and we propose several methods for approximately solving it. We provide a formal bound on the per-neuron drop in accuracy due to Fourier stabilization, and experimentally demonstrate the effectiveness of the proposed approach in boosting robustness of neural networks in several detection settings. Moreover, we show that our approach effectively composes with adversarial training.

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-raviv21a,
  title     = {Enhancing Robustness of Neural Networks through Fourier Stabilization},
  author    = {Raviv, Netanel and Kelley, Aidan and Guo, Minzhe and Vorobeychik, Yevgeniy},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {8880--8889},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/raviv21a/raviv21a.pdf},
  url       = {https://proceedings.mlr.press/v139/raviv21a.html},
  abstract  = {Despite the considerable success of neural networks in security settings such as malware detection, such models have proved vulnerable to evasion attacks, in which attackers make slight changes to inputs (e.g., malware) to bypass detection. We propose a novel approach, Fourier stabilization, for designing evasion-robust neural networks with binary inputs. This approach, which is complementary to other forms of defense, replaces the weights of individual neurons with robust analogs derived using Fourier analytic tools. The choice of which neurons to stabilize in a neural network is then a combinatorial optimization problem, and we propose several methods for approximately solving it. We provide a formal bound on the per-neuron drop in accuracy due to Fourier stabilization, and experimentally demonstrate the effectiveness of the proposed approach in boosting robustness of neural networks in several detection settings. Moreover, we show that our approach effectively composes with adversarial training.}
}
Endnote
%0 Conference Paper
%T Enhancing Robustness of Neural Networks through Fourier Stabilization
%A Netanel Raviv
%A Aidan Kelley
%A Minzhe Guo
%A Yevgeniy Vorobeychik
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-raviv21a
%I PMLR
%P 8880--8889
%U https://proceedings.mlr.press/v139/raviv21a.html
%V 139
%X Despite the considerable success of neural networks in security settings such as malware detection, such models have proved vulnerable to evasion attacks, in which attackers make slight changes to inputs (e.g., malware) to bypass detection. We propose a novel approach, Fourier stabilization, for designing evasion-robust neural networks with binary inputs. This approach, which is complementary to other forms of defense, replaces the weights of individual neurons with robust analogs derived using Fourier analytic tools. The choice of which neurons to stabilize in a neural network is then a combinatorial optimization problem, and we propose several methods for approximately solving it. We provide a formal bound on the per-neuron drop in accuracy due to Fourier stabilization, and experimentally demonstrate the effectiveness of the proposed approach in boosting robustness of neural networks in several detection settings. Moreover, we show that our approach effectively composes with adversarial training.
APA
Raviv, N., Kelley, A., Guo, M., & Vorobeychik, Y. (2021). Enhancing Robustness of Neural Networks through Fourier Stabilization. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:8880-8889. Available from https://proceedings.mlr.press/v139/raviv21a.html.