Adversarial Learning Guarantees for Linear Hypotheses and Neural Networks

Pranjal Awasthi, Natalie Frank, Mehryar Mohri
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:431-441, 2020.

Abstract

Adversarial or test time robustness measures the susceptibility of a classifier to perturbations to the test input. While there has been a flurry of recent work on designing defenses against such perturbations, the theory of adversarial robustness is not well understood. In order to make progress on this, we focus on the problem of understanding generalization in adversarial settings, via the lens of Rademacher complexity. We give upper and lower bounds for the adversarial empirical Rademacher complexity of linear hypotheses with adversarial perturbations measured in $l_r$-norm for an arbitrary $r \geq 1$. We then extend our analysis to provide Rademacher complexity lower and upper bounds for a single ReLU unit. Finally, we give adversarial Rademacher complexity bounds for feed-forward neural networks with one hidden layer.
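For context, the central quantity the abstract refers to can be sketched as follows. This is a standard formulation of (adversarial) empirical Rademacher complexity consistent with the abstract; the precise definitions and normalizations used in the paper may differ in detail.

```latex
% Empirical Rademacher complexity of a hypothesis class H
% on a sample S = (x_1, ..., x_m), with i.i.d. Rademacher
% variables sigma_i taking values in {-1, +1}:
\widehat{\mathfrak{R}}_S(H)
  = \mathbb{E}_{\sigma}\Bigg[\sup_{h \in H}
      \frac{1}{m}\sum_{i=1}^{m} \sigma_i\, h(x_i)\Bigg]

% Adversarial variant: each input may be perturbed within an
% l_r-ball of radius epsilon, so h(x_i) is replaced by its
% worst-case margin value on the labeled point (x_i, y_i):
\widetilde{h}(x_i, y_i)
  = \inf_{\|x' - x_i\|_r \le \epsilon} \; y_i\, h(x')
```

For linear hypotheses $h_w(x) = w \cdot x$, the inner infimum admits the closed form $y_i\, w \cdot x_i - \epsilon \|w\|_{r^*}$ by Hölder's inequality, where $r^*$ is the dual exponent with $1/r + 1/r^* = 1$; this dual-norm penalty is what makes the linear case amenable to the upper and lower bounds described in the abstract.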

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-awasthi20a,
  title     = {Adversarial Learning Guarantees for Linear Hypotheses and Neural Networks},
  author    = {Awasthi, Pranjal and Frank, Natalie and Mohri, Mehryar},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {431--441},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/awasthi20a/awasthi20a.pdf},
  url       = {http://proceedings.mlr.press/v119/awasthi20a.html},
  abstract  = {Adversarial or test time robustness measures the susceptibility of a classifier to perturbations to the test input. While there has been a flurry of recent work on designing defenses against such perturbations, the theory of adversarial robustness is not well understood. In order to make progress on this, we focus on the problem of understanding generalization in adversarial settings, via the lens of Rademacher complexity. We give upper and lower bounds for the adversarial empirical Rademacher complexity of linear hypotheses with adversarial perturbations measured in $l_r$-norm for an arbitrary $r \geq 1$. We then extend our analysis to provide Rademacher complexity lower and upper bounds for a single ReLU unit. Finally, we give adversarial Rademacher complexity bounds for feed-forward neural networks with one hidden layer.}
}
Endnote
%0 Conference Paper
%T Adversarial Learning Guarantees for Linear Hypotheses and Neural Networks
%A Pranjal Awasthi
%A Natalie Frank
%A Mehryar Mohri
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-awasthi20a
%I PMLR
%P 431--441
%U http://proceedings.mlr.press/v119/awasthi20a.html
%V 119
%X Adversarial or test time robustness measures the susceptibility of a classifier to perturbations to the test input. While there has been a flurry of recent work on designing defenses against such perturbations, the theory of adversarial robustness is not well understood. In order to make progress on this, we focus on the problem of understanding generalization in adversarial settings, via the lens of Rademacher complexity. We give upper and lower bounds for the adversarial empirical Rademacher complexity of linear hypotheses with adversarial perturbations measured in $l_r$-norm for an arbitrary $r \geq 1$. We then extend our analysis to provide Rademacher complexity lower and upper bounds for a single ReLU unit. Finally, we give adversarial Rademacher complexity bounds for feed-forward neural networks with one hidden layer.
APA
Awasthi, P., Frank, N. & Mohri, M. (2020). Adversarial Learning Guarantees for Linear Hypotheses and Neural Networks. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:431-441. Available from http://proceedings.mlr.press/v119/awasthi20a.html.