Adversarially Robust Learning with Unknown Perturbation Sets

Omar Montasser, Steve Hanneke, Nathan Srebro
Proceedings of Thirty Fourth Conference on Learning Theory, PMLR 134:3452-3482, 2021.

Abstract

We study the problem of learning predictors that are robust to adversarial examples with respect to an unknown perturbation set, relying instead on interaction with an adversarial attacker or access to attack oracles, examining different models for such interactions. We obtain upper bounds on the sample complexity and upper and lower bounds on the number of required interactions, or number of successful attacks, in different interaction models, in terms of the VC and Littlestone dimensions of the hypothesis class of predictors, and without any assumptions on the perturbation set.
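The interaction model in the abstract can be illustrated with a toy sketch. Everything below is an illustrative assumption, not the paper's actual algorithm: the hypothesis class is 1-D thresholds, the perturbation set (unknown to the learner) is an interval of radius `EPS` around each point, and the attack oracle returns a successful adversarial example when one exists. The learner repeatedly retrains on the clean sample augmented with the perturbations revealed by successful attacks.

```python
# Toy sketch of learning via an attack oracle. All names (EPS, the
# threshold class, the retraining loop) are illustrative assumptions,
# not the algorithm from the paper.

EPS = 0.3  # hidden perturbation radius; used only inside the oracle


def predict(theta, x):
    """Threshold classifier h_theta(x) = 1 iff x >= theta."""
    return 1 if x >= theta else 0


def attack_oracle(theta, x, y):
    """Return a perturbation of x (within the hidden set) that h_theta
    misclassifies, or None if h_theta is robustly correct at (x, y)."""
    for z in (x - EPS, x, x + EPS):
        if predict(theta, z) != y:
            return z
    return None


def erm_threshold(data):
    """Pick a threshold minimizing errors on the (augmented) sample."""
    candidates = [x for x, _ in data] + [min(x for x, _ in data) - 1]
    return min(candidates,
               key=lambda t: sum(predict(t, x) != y for x, y in data))


def learn_with_attacks(sample, rounds=10):
    """Retrain on clean data plus examples returned by successful attacks."""
    data = list(sample)
    theta = erm_threshold(data)
    for _ in range(rounds):
        attacks = [(attack_oracle(theta, x, y), y) for x, y in sample]
        attacks = [(z, y) for z, y in attacks if z is not None]
        if not attacks:  # no successful attack: robustly correct on sample
            return theta
        data += attacks  # fold the revealed perturbations back in
        theta = erm_threshold(data)
    return theta


sample = [(-2.0, 0), (-1.0, 0), (1.0, 1), (2.0, 1)]
theta = learn_with_attacks(sample)
assert all(attack_oracle(theta, x, y) is None for x, y in sample)
```

Note how the learner never sees `EPS` directly: information about the perturbation set reaches it only through the examples the oracle returns, which is the kind of interaction the abstract's bounds count.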

Cite this Paper


BibTeX
@InProceedings{pmlr-v134-montasser21a,
  title     = {Adversarially Robust Learning with Unknown Perturbation Sets},
  author    = {Montasser, Omar and Hanneke, Steve and Srebro, Nathan},
  booktitle = {Proceedings of Thirty Fourth Conference on Learning Theory},
  pages     = {3452--3482},
  year      = {2021},
  editor    = {Belkin, Mikhail and Kpotufe, Samory},
  volume    = {134},
  series    = {Proceedings of Machine Learning Research},
  month     = {15--19 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v134/montasser21a/montasser21a.pdf},
  url       = {https://proceedings.mlr.press/v134/montasser21a.html},
  abstract  = {We study the problem of learning predictors that are robust to adversarial examples with respect to an unknown perturbation set, relying instead on interaction with an adversarial attacker or access to attack oracles, examining different models for such interactions. We obtain upper bounds on the sample complexity and upper and lower bounds on the number of required interactions, or number of successful attacks, in different interaction models, in terms of the VC and Littlestone dimensions of the hypothesis class of predictors, and without any assumptions on the perturbation set.}
}
Endnote
%0 Conference Paper
%T Adversarially Robust Learning with Unknown Perturbation Sets
%A Omar Montasser
%A Steve Hanneke
%A Nathan Srebro
%B Proceedings of Thirty Fourth Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2021
%E Mikhail Belkin
%E Samory Kpotufe
%F pmlr-v134-montasser21a
%I PMLR
%P 3452--3482
%U https://proceedings.mlr.press/v134/montasser21a.html
%V 134
%X We study the problem of learning predictors that are robust to adversarial examples with respect to an unknown perturbation set, relying instead on interaction with an adversarial attacker or access to attack oracles, examining different models for such interactions. We obtain upper bounds on the sample complexity and upper and lower bounds on the number of required interactions, or number of successful attacks, in different interaction models, in terms of the VC and Littlestone dimensions of the hypothesis class of predictors, and without any assumptions on the perturbation set.
APA
Montasser, O., Hanneke, S. & Srebro, N. (2021). Adversarially Robust Learning with Unknown Perturbation Sets. Proceedings of Thirty Fourth Conference on Learning Theory, in Proceedings of Machine Learning Research 134:3452-3482. Available from https://proceedings.mlr.press/v134/montasser21a.html.