Robust learning under clean-label attack

Avrim Blum, Steve Hanneke, Jian Qian, Han Shao
Proceedings of Thirty Fourth Conference on Learning Theory, PMLR 134:591-634, 2021.

Abstract

We study the problem of robust learning under clean-label data-poisoning attacks, where the attacker injects (an arbitrary set of) \emph{correctly-labeled} examples into the training set to fool the algorithm into making mistakes on \emph{specific} test instances at test time. The learning goal is to minimize the attackable rate (the probability mass of attackable test instances), which is more difficult than optimal PAC learning. As we show, any robust algorithm with diminishing attackable rate can achieve the optimal dependence on $\epsilon$ in its PAC sample complexity, i.e., $O(1/\epsilon)$. On the other hand, the attackable rate might be large even for some optimal PAC learners, e.g., SVM for linear classifiers. Furthermore, we show that the class of linear hypotheses is not robustly learnable when the data distribution has zero margin, and is robustly learnable in the case of positive margin but requires sample complexity exponential in the dimension. For a general hypothesis class with bounded VC dimension, if the attacker is limited to adding at most $t=O(1/\epsilon)$ poison examples, the optimal robust learning sample complexity grows linearly with $t$.
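To make the SVM claim concrete, here is a minimal sketch, not taken from the paper: a clean-label attack on a (near) hard-margin linear SVM over a 1-D threshold class. The data values, the target point, and the use of scikit-learn's SVC with a large C are illustrative assumptions; the point is only that even a single correctly-labeled poison example can move the max-margin boundary past a specific test instance.

# Minimal sketch (illustrative, not the paper's construction): a clean-label
# poisoning attack on a near hard-margin linear SVM in 1-D. The true concept
# is the threshold sign(x); the injected point is correctly labeled, yet the
# max-margin boundary moves past a chosen target test point.
import numpy as np
from sklearn.svm import SVC

# Clean training sample from the threshold concept at 0.
X_clean = np.array([[-2.0], [-1.5], [1.5], [2.0]])
y_clean = np.sign(X_clean.ravel())          # labels in {-1, +1}

target = np.array([[0.3]])                  # positive instance the attacker targets

# Large C approximates a hard-margin SVM; the boundary sits midway between
# the closest negative (-1.5) and closest positive (1.5), i.e., at 0.
clf = SVC(kernel="linear", C=1e6).fit(X_clean, y_clean)
print("before attack:", clf.predict(target))   # -> [1.] (correct)

# One correctly-labeled poison point just left of 0: still a clean label,
# but the boundary shifts to (-0.01 + 1.5) / 2 = 0.745, which exceeds 0.3.
X_poison = np.array([[-0.01]])
y_poison = np.array([-1.0])

clf_poisoned = SVC(kernel="linear", C=1e6).fit(
    np.vstack([X_clean, X_poison]),
    np.concatenate([y_clean, y_poison]),
)
print("after attack: ", clf_poisoned.predict(target))  # -> [-1.] (misclassified)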

Cite this Paper


BibTeX
@InProceedings{pmlr-v134-blum21a,
  title     = {Robust learning under clean-label attack},
  author    = {Blum, Avrim and Hanneke, Steve and Qian, Jian and Shao, Han},
  booktitle = {Proceedings of Thirty Fourth Conference on Learning Theory},
  pages     = {591--634},
  year      = {2021},
  editor    = {Belkin, Mikhail and Kpotufe, Samory},
  volume    = {134},
  series    = {Proceedings of Machine Learning Research},
  month     = {15--19 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v134/blum21a/blum21a.pdf},
  url       = {https://proceedings.mlr.press/v134/blum21a.html},
  abstract  = {We study the problem of robust learning under clean-label data-poisoning attacks, where the attacker injects (an arbitrary set of) \emph{correctly-labeled} examples into the training set to fool the algorithm into making mistakes on \emph{specific} test instances at test time. The learning goal is to minimize the attackable rate (the probability mass of attackable test instances), which is more difficult than optimal PAC learning. As we show, any robust algorithm with diminishing attackable rate can achieve the optimal dependence on $\epsilon$ in its PAC sample complexity, i.e., $O(1/\epsilon)$. On the other hand, the attackable rate might be large even for some optimal PAC learners, e.g., SVM for linear classifiers. Furthermore, we show that the class of linear hypotheses is not robustly learnable when the data distribution has zero margin, and is robustly learnable in the case of positive margin but requires sample complexity exponential in the dimension. For a general hypothesis class with bounded VC dimension, if the attacker is limited to adding at most $t=O(1/\epsilon)$ poison examples, the optimal robust learning sample complexity grows linearly with $t$.}
}
Endnote
%0 Conference Paper
%T Robust learning under clean-label attack
%A Avrim Blum
%A Steve Hanneke
%A Jian Qian
%A Han Shao
%B Proceedings of Thirty Fourth Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2021
%E Mikhail Belkin
%E Samory Kpotufe
%F pmlr-v134-blum21a
%I PMLR
%P 591--634
%U https://proceedings.mlr.press/v134/blum21a.html
%V 134
%X We study the problem of robust learning under clean-label data-poisoning attacks, where the attacker injects (an arbitrary set of) \emph{correctly-labeled} examples into the training set to fool the algorithm into making mistakes on \emph{specific} test instances at test time. The learning goal is to minimize the attackable rate (the probability mass of attackable test instances), which is more difficult than optimal PAC learning. As we show, any robust algorithm with diminishing attackable rate can achieve the optimal dependence on $\epsilon$ in its PAC sample complexity, i.e., $O(1/\epsilon)$. On the other hand, the attackable rate might be large even for some optimal PAC learners, e.g., SVM for linear classifiers. Furthermore, we show that the class of linear hypotheses is not robustly learnable when the data distribution has zero margin, and is robustly learnable in the case of positive margin but requires sample complexity exponential in the dimension. For a general hypothesis class with bounded VC dimension, if the attacker is limited to adding at most $t=O(1/\epsilon)$ poison examples, the optimal robust learning sample complexity grows linearly with $t$.
APA
Blum, A., Hanneke, S., Qian, J. & Shao, H. (2021). Robust learning under clean-label attack. Proceedings of Thirty Fourth Conference on Learning Theory, in Proceedings of Machine Learning Research 134:591-634. Available from https://proceedings.mlr.press/v134/blum21a.html.
