Adversarial Risk Bounds through Sparsity based Compression

Emilio Balda, Niklas Koep, Arash Behboodi, Rudolf Mathar
Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, PMLR 108:3816-3825, 2020.

Abstract

Neural networks have been shown to be vulnerable to minor adversarial perturbations of their inputs, especially for high-dimensional data under $\ell_\infty$ attacks. To combat this problem, techniques like adversarial training have been employed to obtain models that are robust on the training set. However, the robustness of such models against adversarial perturbations may not generalize to unseen data. To study how robustness generalizes, recent works assume that the inputs have bounded $\ell_2$-norm in order to bound the adversarial risk for $\ell_\infty$ attacks with no explicit dimension dependence. In this work, we focus on $\ell_\infty$ attacks with $\ell_\infty$-bounded inputs and prove margin-based bounds. Specifically, we use a compression-based approach that relies on efficiently compressing the set of tunable parameters without distorting the adversarial risk. To achieve this, we apply the concepts of effective sparsity and effective joint sparsity to the weight matrices of neural networks. This leads to bounds with no explicit dependence on either the input dimension or the number of classes. Our results show that neural networks with approximately sparse weight matrices not only enjoy enhanced robustness but also better generalization. Finally, empirical simulations show that the notion of effective joint sparsity plays a significant role in generalizing robustness to $\ell_\infty$ attacks.
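
For context, the adversarial risk under $\ell_\infty$ attacks is commonly formalized as $\Pr_{(x,y)}\big[\exists\,\delta,\ \|\delta\|_\infty \le \epsilon:\ f(x+\delta) \neq y\big]$. The sparsity notions named in the abstract can be illustrated with a minimal NumPy sketch, assuming the standard compressed-sensing definition of effective sparsity, $s(x) = (\|x\|_1/\|x\|_2)^2$, and its natural row-wise analogue for effective joint sparsity; the paper's exact definitions and normalizations may differ.

import numpy as np

def effective_sparsity(x):
    # Effective sparsity of a vector: (||x||_1 / ||x||_2)^2.
    # It equals k for a vector with k equal-magnitude nonzero entries
    # and never exceeds the number of nonzeros, so it serves as a
    # smooth proxy for the "approximate" sparsity of trained weights.
    x = np.asarray(x, dtype=float).ravel()
    l2 = np.linalg.norm(x)
    return 0.0 if l2 == 0 else (np.linalg.norm(x, 1) / l2) ** 2

def effective_joint_sparsity(W):
    # Row-wise analogue (an assumption here, not the paper's exact
    # formula): apply the same ratio to the vector of row norms,
    # i.e. (sum_i ||W_i||_2 / ||W||_F)^2. It is small when only a
    # few rows of W carry most of the energy.
    row_norms = np.linalg.norm(np.asarray(W, dtype=float), axis=1)
    return effective_sparsity(row_norms)

# Example: a weight matrix whose rows decay geometrically in magnitude
# is approximately row-sparse; its effective joint sparsity (about 10)
# is far below its row count (100).
rng = np.random.default_rng(0)
W = rng.standard_normal((100, 50)) * np.exp(-0.2 * np.arange(100))[:, None]
print(effective_sparsity(W), effective_joint_sparsity(W))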

Cite this Paper


BibTeX
@InProceedings{pmlr-v108-balda20a,
  title     = {Adversarial Risk Bounds through Sparsity based Compression},
  author    = {Balda, Emilio and Koep, Niklas and Behboodi, Arash and Mathar, Rudolf},
  booktitle = {Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics},
  pages     = {3816--3825},
  year      = {2020},
  editor    = {Chiappa, Silvia and Calandra, Roberto},
  volume    = {108},
  series    = {Proceedings of Machine Learning Research},
  month     = {26--28 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v108/balda20a/balda20a.pdf},
  url       = {https://proceedings.mlr.press/v108/balda20a.html},
  abstract  = {Neural networks have been shown to be vulnerable to minor adversarial perturbations of their inputs, especially for high-dimensional data under $\ell_\infty$ attacks. To combat this problem, techniques like adversarial training have been employed to obtain models that are robust on the training set. However, the robustness of such models against adversarial perturbations may not generalize to unseen data. To study how robustness generalizes, recent works assume that the inputs have bounded $\ell_2$-norm in order to bound the adversarial risk for $\ell_\infty$ attacks with no explicit dimension dependence. In this work, we focus on $\ell_\infty$ attacks with $\ell_\infty$-bounded inputs and prove margin-based bounds. Specifically, we use a compression-based approach that relies on efficiently compressing the set of tunable parameters without distorting the adversarial risk. To achieve this, we apply the concepts of effective sparsity and effective joint sparsity to the weight matrices of neural networks. This leads to bounds with no explicit dependence on either the input dimension or the number of classes. Our results show that neural networks with approximately sparse weight matrices not only enjoy enhanced robustness but also better generalization. Finally, empirical simulations show that the notion of effective joint sparsity plays a significant role in generalizing robustness to $\ell_\infty$ attacks.}
}
Endnote
%0 Conference Paper
%T Adversarial Risk Bounds through Sparsity based Compression
%A Emilio Balda
%A Niklas Koep
%A Arash Behboodi
%A Rudolf Mathar
%B Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2020
%E Silvia Chiappa
%E Roberto Calandra
%F pmlr-v108-balda20a
%I PMLR
%P 3816--3825
%U https://proceedings.mlr.press/v108/balda20a.html
%V 108
%X Neural networks have been shown to be vulnerable to minor adversarial perturbations of their inputs, especially for high-dimensional data under $\ell_\infty$ attacks. To combat this problem, techniques like adversarial training have been employed to obtain models that are robust on the training set. However, the robustness of such models against adversarial perturbations may not generalize to unseen data. To study how robustness generalizes, recent works assume that the inputs have bounded $\ell_2$-norm in order to bound the adversarial risk for $\ell_\infty$ attacks with no explicit dimension dependence. In this work, we focus on $\ell_\infty$ attacks with $\ell_\infty$-bounded inputs and prove margin-based bounds. Specifically, we use a compression-based approach that relies on efficiently compressing the set of tunable parameters without distorting the adversarial risk. To achieve this, we apply the concepts of effective sparsity and effective joint sparsity to the weight matrices of neural networks. This leads to bounds with no explicit dependence on either the input dimension or the number of classes. Our results show that neural networks with approximately sparse weight matrices not only enjoy enhanced robustness but also better generalization. Finally, empirical simulations show that the notion of effective joint sparsity plays a significant role in generalizing robustness to $\ell_\infty$ attacks.
APA
Balda, E., Koep, N., Behboodi, A. & Mathar, R. (2020). Adversarial Risk Bounds through Sparsity based Compression. Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 108:3816-3825. Available from https://proceedings.mlr.press/v108/balda20a.html.