Adversarially Robust Learning with Unknown Perturbation Sets
Proceedings of Thirty Fourth Conference on Learning Theory, PMLR 134:3452-3482, 2021.
Abstract
We study the problem of learning predictors that are robust to adversarial examples with respect to an unknown perturbation set: rather than assuming knowledge of the set, the learner relies on interaction with an adversarial attacker or access to attack oracles, and we examine different models for such interaction. We obtain upper bounds on the sample complexity, as well as upper and lower bounds on the number of required interactions, or number of successful attacks, in the different interaction models, in terms of the VC and Littlestone dimensions of the hypothesis class of predictors, without any assumptions on the perturbation set.
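As a rough illustration of the interaction model described above, the sketch below shows a learner that repeatedly proposes a predictor and queries an attack oracle, eliminating hypotheses refuted by successful attacks. Everything here is hypothetical and not from the paper: the names (`robust_learn_with_attack_oracle`, the `attack` callable), the toy scalar domain, and the version-space elimination strategy are assumptions chosen only to make the query/response protocol concrete; the paper's actual algorithms and its VC/Littlestone-dimension bounds are considerably more involved.

```python
from typing import Callable, Optional, Sequence, Tuple

# Toy stand-ins for a general domain; the perturbation set is unknown to the
# learner and is accessible only through the attack oracle.
Example = Tuple[float, int]          # (input x, label y)
Predictor = Callable[[float], int]  # hypothesis h: x -> predicted label

def robust_learn_with_attack_oracle(
    sample: Sequence[Example],
    hypotheses: Sequence[Predictor],
    attack: Callable[[Predictor, float, int], Optional[float]],
) -> Predictor:
    """Toy sketch of the interaction loop: propose a predictor consistent
    with all attacks seen so far; each successful attack z becomes a
    counterexample that shrinks the version space. The number of rounds
    plays the role of the 'number of successful attacks' that the paper
    bounds; this loop is purely illustrative."""
    counterexamples: list[Example] = []
    version_space = list(hypotheses)
    while version_space:
        h = version_space[0]  # any hypothesis consistent with past attacks
        # Query the attack oracle on each training point; a returned z is a
        # perturbation of x (inside the unknown set) on which h errs.
        attacks = [(attack(h, x, y), y) for x, y in sample]
        successful = [(z, y) for z, y in attacks if z is not None]
        if not successful:
            return h  # no successful attack: h is robustly correct on the sample
        counterexamples.extend(successful)
        # h itself fails on the new counterexamples, so the version space
        # strictly shrinks each round and the loop terminates.
        version_space = [g for g in version_space
                         if all(g(z) == y for z, y in counterexamples)]
    raise ValueError("no hypothesis in the class survives the observed attacks")
```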