Near-Optimal Evasion of Convex-Inducing Classifiers


Blaine Nelson, Benjamin Rubinstein, Ling Huang, Anthony Joseph, Shing-hon Lau, Steven Lee, Satish Rao, Anthony Tran, Doug Tygar;
Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, PMLR 9:549-556, 2010.

Abstract

Classifiers are often used to detect miscreant activities. We study how an adversary can efficiently query a classifier to elicit information that allows the adversary to evade detection at near-minimal cost. We generalize results of Lowd and Meek (2005) to convex-inducing classifiers. We present algorithms that construct undetected instances of near-minimal cost using only polynomially many queries in the dimension of the space and without reverse engineering the decision boundary.
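The query-based evasion strategy can be illustrated with a simplified sketch: a binary line search between the adversary's desired (detected) instance and a known undetected instance, using only membership queries. All names here are illustrative assumptions; the paper's full algorithms achieve near-minimal cost over convex-inducing classifiers with polynomially many queries, not just along one segment.

```python
import numpy as np

def line_search_evasion(classify, x_a, x_minus, eps=1e-6):
    """Find a point on the segment [x_a, x_minus] that evades detection.

    classify(x) -> True if x is detected (positive class).
    Assumes classify(x_a) is True and classify(x_minus) is False;
    these names are hypothetical, not from the paper.
    """
    lo, hi = 0.0, 1.0  # lo -> x_a (detected), hi -> x_minus (undetected)
    queries = 0
    while hi - lo > eps:
        mid = (lo + hi) / 2
        x_mid = (1 - mid) * x_a + mid * x_minus
        queries += 1
        if classify(x_mid):
            lo = mid   # still detected: move toward x_minus
        else:
            hi = mid   # evades: tighten toward x_a to reduce cost
    x_evade = (1 - hi) * x_a + hi * x_minus  # just on the undetected side
    return x_evade, queries

# Example: a linear classifier w.x > b induces a convex positive set.
w, b = np.array([1.0, 2.0]), 1.0
classify = lambda x: float(w @ x) > b
x_a = np.array([2.0, 2.0])       # adversary's ideal instance (detected)
x_minus = np.array([0.0, 0.0])   # known benign instance
x_evade, n = line_search_evasion(classify, x_a, x_minus)
```

The query count grows only logarithmically in the desired precision (about 20 queries for eps=1e-6), which conveys why membership queries alone suffice without reverse engineering the decision boundary.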
