Support Vector Machines Under Adversarial Label Noise

Battista Biggio, Blaine Nelson, Pavel Laskov
Proceedings of the Asian Conference on Machine Learning, PMLR 20:97-112, 2011.

Abstract

In adversarial classification tasks like spam filtering and intrusion detection, malicious adversaries may manipulate data to thwart the outcome of an automatic analysis. Thus, besides achieving good classification performance, machine learning algorithms have to be robust against adversarial data manipulation to successfully operate in these tasks. While support vector machines (SVMs) have been shown to be a very successful approach in classification problems, their effectiveness in adversarial classification tasks has not yet been extensively investigated. In this paper we present a preliminary investigation of the robustness of SVMs against adversarial data manipulation. In particular, we assume that the adversary has control over some training data and aims to subvert the SVM learning process. Under this assumption, we show that this is indeed possible, and propose a strategy to improve the robustness of SVMs to training data manipulation based on a simple kernel matrix correction.
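The kernel matrix correction itself is detailed in the paper; as a rough, hypothetical sketch of the general idea, the Python snippet below down-weights the off-diagonal entries of a precomputed kernel to reflect an assumed independent label-flip probability mu. Under such flips, the expected product of two training labels shrinks by a factor of (1 - 2*mu)^2, so the corrected matrix trusts individual points (the diagonal) relatively more than pairwise agreements. The function name, the flip model, and the synthetic data are illustrative assumptions, not the authors' exact formulation.

    import numpy as np
    from sklearn.svm import SVC

    def corrected_kernel(K, mu):
        # Hypothetical label-noise correction: scale off-diagonal entries by
        # (1 - 2*mu)^2, the expected shrinkage of y_i * y_j under independent
        # label flips with probability mu; the diagonal is left untouched.
        K_corr = ((1.0 - 2.0 * mu) ** 2) * K
        np.fill_diagonal(K_corr, np.diag(K))
        return K_corr

    # Toy usage with a precomputed linear kernel on synthetic data.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 5))
    y = np.where(X[:, 0] + 0.1 * rng.normal(size=100) > 0, 1, -1)
    K = X @ X.T
    clf = SVC(kernel="precomputed", C=1.0)
    clf.fit(corrected_kernel(K, mu=0.1), y)

Because a correction of this form only rescales the kernel matrix, it plugs into any SVM solver that accepts precomputed kernels; at test time the uncorrected kernel between test and training points can be used, since only the training labels are assumed noisy.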

Cite this Paper


BibTeX
@InProceedings{pmlr-v20-biggio11,
  title     = {Support Vector Machines Under Adversarial Label Noise},
  author    = {Biggio, Battista and Nelson, Blaine and Laskov, Pavel},
  booktitle = {Proceedings of the Asian Conference on Machine Learning},
  pages     = {97--112},
  year      = {2011},
  editor    = {Hsu, Chun-Nan and Lee, Wee Sun},
  volume    = {20},
  series    = {Proceedings of Machine Learning Research},
  address   = {South Garden Hotels and Resorts, Taoyuan, Taiwan},
  month     = {14--15 Nov},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v20/biggio11/biggio11.pdf},
  url       = {https://proceedings.mlr.press/v20/biggio11.html},
  abstract  = {In adversarial classification tasks like spam filtering and intrusion detection, malicious adversaries may manipulate data to thwart the outcome of an automatic analysis. Thus, besides achieving good classification performance, machine learning algorithms have to be robust against adversarial data manipulation to successfully operate in these tasks. While support vector machines (SVMs) have been shown to be a very successful approach in classification problems, their effectiveness in adversarial classification tasks has not yet been extensively investigated. In this paper we present a preliminary investigation of the robustness of SVMs against adversarial data manipulation. In particular, we assume that the adversary has control over some training data and aims to subvert the SVM learning process. Under this assumption, we show that this is indeed possible, and propose a strategy to improve the robustness of SVMs to training data manipulation based on a simple kernel matrix correction.}
}
Endnote
%0 Conference Paper
%T Support Vector Machines Under Adversarial Label Noise
%A Battista Biggio
%A Blaine Nelson
%A Pavel Laskov
%B Proceedings of the Asian Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2011
%E Chun-Nan Hsu
%E Wee Sun Lee
%F pmlr-v20-biggio11
%I PMLR
%P 97--112
%U https://proceedings.mlr.press/v20/biggio11.html
%V 20
%X In adversarial classification tasks like spam filtering and intrusion detection, malicious adversaries may manipulate data to thwart the outcome of an automatic analysis. Thus, besides achieving good classification performance, machine learning algorithms have to be robust against adversarial data manipulation to successfully operate in these tasks. While support vector machines (SVMs) have been shown to be a very successful approach in classification problems, their effectiveness in adversarial classification tasks has not yet been extensively investigated. In this paper we present a preliminary investigation of the robustness of SVMs against adversarial data manipulation. In particular, we assume that the adversary has control over some training data and aims to subvert the SVM learning process. Under this assumption, we show that this is indeed possible, and propose a strategy to improve the robustness of SVMs to training data manipulation based on a simple kernel matrix correction.
RIS
TY - CPAPER
TI - Support Vector Machines Under Adversarial Label Noise
AU - Battista Biggio
AU - Blaine Nelson
AU - Pavel Laskov
BT - Proceedings of the Asian Conference on Machine Learning
DA - 2011/11/17
ED - Chun-Nan Hsu
ED - Wee Sun Lee
ID - pmlr-v20-biggio11
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 20
SP - 97
EP - 112
L1 - http://proceedings.mlr.press/v20/biggio11/biggio11.pdf
UR - https://proceedings.mlr.press/v20/biggio11.html
AB - In adversarial classification tasks like spam filtering and intrusion detection, malicious adversaries may manipulate data to thwart the outcome of an automatic analysis. Thus, besides achieving good classification performance, machine learning algorithms have to be robust against adversarial data manipulation to successfully operate in these tasks. While support vector machines (SVMs) have been shown to be a very successful approach in classification problems, their effectiveness in adversarial classification tasks has not yet been extensively investigated. In this paper we present a preliminary investigation of the robustness of SVMs against adversarial data manipulation. In particular, we assume that the adversary has control over some training data and aims to subvert the SVM learning process. Under this assumption, we show that this is indeed possible, and propose a strategy to improve the robustness of SVMs to training data manipulation based on a simple kernel matrix correction.
ER -
APA
Biggio, B., Nelson, B. & Laskov, P. (2011). Support Vector Machines Under Adversarial Label Noise. Proceedings of the Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 20:97-112. Available from https://proceedings.mlr.press/v20/biggio11.html.