When are Non-Parametric Methods Robust?

Robi Bhattacharjee, Kamalika Chaudhuri
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:832-841, 2020.

Abstract

A growing body of research has shown that many classifiers are susceptible to adversarial examples – small strategic modifications to test inputs that lead to misclassification. In this work, we study general non-parametric methods, with a view towards understanding when they are robust to these modifications. We establish general conditions under which non-parametric methods are r-consistent – in the sense that they converge to optimally robust and accurate classifiers in the large sample limit. Concretely, our results show that when data is well-separated, nearest neighbors and kernel classifiers are r-consistent, while histograms are not. For general data distributions, we prove that preprocessing by Adversarial Pruning (Yang et al., 2019) – which makes data well-separated – followed by nearest neighbors or kernel classifiers also leads to r-consistency.
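To make the pipeline in the last sentence concrete, here is a minimal sketch (not the authors' implementation) of adversarial-pruning-style preprocessing followed by a 1-nearest-neighbor classifier. The greedy conflict-removal heuristic, the function names, and the radius value r=0.3 are illustrative assumptions on my part; Yang et al. (2019) formulate pruning as an optimal subset-selection problem rather than a greedy one.

```python
# Sketch: prune training data so that no two differently-labeled points
# are within 2r of each other, then classify with 1-nearest-neighbor.
# Greedy removal is an assumed simplification of Adversarial Pruning
# (Yang et al., 2019), which solves the subset selection optimally.

import numpy as np

def adversarial_prune(X, y, r):
    """Greedily drop points until no two differently-labeled
    points are within distance 2r of each other."""
    keep = list(range(len(X)))
    while True:
        # Count each kept point's conflicts with opposite-label points.
        conflicts = {i: 0 for i in keep}
        for a in keep:
            for b in keep:
                if y[a] != y[b] and np.linalg.norm(X[a] - X[b]) < 2 * r:
                    conflicts[a] += 1
        worst = max(keep, key=lambda i: conflicts[i])
        if conflicts[worst] == 0:
            break                # data is now well-separated
        keep.remove(worst)       # drop the most-conflicted point
    return X[keep], y[keep]

def nn_predict(X_train, y_train, x):
    """Plain 1-nearest-neighbor prediction."""
    dists = np.linalg.norm(X_train - x, axis=1)
    return y_train[np.argmin(dists)]

# Toy usage: two overlapping Gaussian blobs (hypothetical data).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (50, 2)), rng.normal(1.5, 1.0, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
Xp, yp = adversarial_prune(X, y, r=0.3)   # r is the assumed attack radius
print(len(Xp), "of", len(X), "points kept after pruning")
print("prediction at origin:", nn_predict(Xp, yp, np.array([0.0, 0.0])))
```

After pruning, any test point within distance r of a retained training point has only same-label training points within that radius, which is the well-separatedness the abstract refers to.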

Cite this Paper

BibTeX
@InProceedings{pmlr-v119-bhattacharjee20a,
  title     = {When are Non-Parametric Methods Robust?},
  author    = {Bhattacharjee, Robi and Chaudhuri, Kamalika},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {832--841},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/bhattacharjee20a/bhattacharjee20a.pdf},
  url       = {https://proceedings.mlr.press/v119/bhattacharjee20a.html},
  abstract  = {A growing body of research has shown that many classifiers are susceptible to adversarial examples – small strategic modifications to test inputs that lead to misclassification. In this work, we study general non-parametric methods, with a view towards understanding when they are robust to these modifications. We establish general conditions under which non-parametric methods are r-consistent – in the sense that they converge to optimally robust and accurate classifiers in the large sample limit. Concretely, our results show that when data is well-separated, nearest neighbors and kernel classifiers are r-consistent, while histograms are not. For general data distributions, we prove that preprocessing by Adversarial Pruning (Yang et al., 2019) – which makes data well-separated – followed by nearest neighbors or kernel classifiers also leads to r-consistency.}
}
Endnote
%0 Conference Paper
%T When are Non-Parametric Methods Robust?
%A Robi Bhattacharjee
%A Kamalika Chaudhuri
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-bhattacharjee20a
%I PMLR
%P 832--841
%U https://proceedings.mlr.press/v119/bhattacharjee20a.html
%V 119
%X A growing body of research has shown that many classifiers are susceptible to adversarial examples – small strategic modifications to test inputs that lead to misclassification. In this work, we study general non-parametric methods, with a view towards understanding when they are robust to these modifications. We establish general conditions under which non-parametric methods are r-consistent – in the sense that they converge to optimally robust and accurate classifiers in the large sample limit. Concretely, our results show that when data is well-separated, nearest neighbors and kernel classifiers are r-consistent, while histograms are not. For general data distributions, we prove that preprocessing by Adversarial Pruning (Yang et al., 2019) – which makes data well-separated – followed by nearest neighbors or kernel classifiers also leads to r-consistency.
APA
Bhattacharjee, R. & Chaudhuri, K. (2020). When are Non-Parametric Methods Robust? Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:832-841. Available from https://proceedings.mlr.press/v119/bhattacharjee20a.html.
