Transductive Robust Learning Guarantees

Omar Montasser, Steve Hanneke, Nathan Srebro
Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:11461-11471, 2022.

Abstract

We study the problem of adversarially robust learning in the transductive setting. For classes H of bounded VC dimension, we propose a simple transductive learner that, when presented with a set of labeled training examples and a set of unlabeled test examples (both sets possibly adversarially perturbed), correctly labels the test examples with a robust error rate that is linear in the VC dimension and adaptive to the complexity of the perturbation set. This result provides an exponential improvement in the dependence on VC dimension over the best known upper bound on the robust error in the inductive setting, at the expense of competing with a more restrictive notion of optimal robust error.

Cite this Paper


BibTeX
@InProceedings{pmlr-v151-montasser22a,
  title     = {Transductive Robust Learning Guarantees},
  author    = {Montasser, Omar and Hanneke, Steve and Srebro, Nathan},
  booktitle = {Proceedings of The 25th International Conference on Artificial Intelligence and Statistics},
  pages     = {11461--11471},
  year      = {2022},
  editor    = {Camps-Valls, Gustau and Ruiz, Francisco J. R. and Valera, Isabel},
  volume    = {151},
  series    = {Proceedings of Machine Learning Research},
  month     = {28--30 Mar},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v151/montasser22a/montasser22a.pdf},
  url       = {https://proceedings.mlr.press/v151/montasser22a.html},
  abstract  = {We study the problem of adversarially robust learning in the transductive setting. For classes H of bounded VC dimension, we propose a simple transductive learner that, when presented with a set of labeled training examples and a set of unlabeled test examples (both sets possibly adversarially perturbed), correctly labels the test examples with a robust error rate that is linear in the VC dimension and adaptive to the complexity of the perturbation set. This result provides an exponential improvement in the dependence on VC dimension over the best known upper bound on the robust error in the inductive setting, at the expense of competing with a more restrictive notion of optimal robust error.}
}
Endnote
%0 Conference Paper
%T Transductive Robust Learning Guarantees
%A Omar Montasser
%A Steve Hanneke
%A Nathan Srebro
%B Proceedings of The 25th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2022
%E Gustau Camps-Valls
%E Francisco J. R. Ruiz
%E Isabel Valera
%F pmlr-v151-montasser22a
%I PMLR
%P 11461--11471
%U https://proceedings.mlr.press/v151/montasser22a.html
%V 151
%X We study the problem of adversarially robust learning in the transductive setting. For classes H of bounded VC dimension, we propose a simple transductive learner that, when presented with a set of labeled training examples and a set of unlabeled test examples (both sets possibly adversarially perturbed), correctly labels the test examples with a robust error rate that is linear in the VC dimension and adaptive to the complexity of the perturbation set. This result provides an exponential improvement in the dependence on VC dimension over the best known upper bound on the robust error in the inductive setting, at the expense of competing with a more restrictive notion of optimal robust error.
APA
Montasser, O., Hanneke, S. & Srebro, N. (2022). Transductive Robust Learning Guarantees. Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 151:11461-11471. Available from https://proceedings.mlr.press/v151/montasser22a.html.