Proceedings of the Asian Conference on Machine Learning, PMLR 20:97-112, 2011.
Abstract
In adversarial classification tasks like spam filtering and intrusion detection, malicious adversaries may manipulate data to thwart the outcome of an automatic analysis. Thus, besides achieving good classification performance, machine learning algorithms have to be robust against adversarial data manipulation to successfully operate in these tasks. While support vector machines (SVMs) have been shown to be a very successful approach to classification problems, their effectiveness in adversarial classification tasks has not yet been extensively investigated. In this paper we present a preliminary investigation of the robustness of SVMs against adversarial data manipulation. In particular, we assume that the adversary controls some of the training data and aims to subvert the SVM learning process. Under this assumption, we show that such subversion is indeed possible, and we propose a strategy to improve the robustness of SVMs to training data manipulation based on a simple kernel matrix correction.
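To make the kernel matrix correction mentioned in the abstract concrete, here is a minimal Python sketch. It is not the paper's exact formulation: it assumes labels are flipped independently with a defender-estimated probability mu, and replaces the label-annotated kernel matrix Q (with Q_ij = y_i y_j K_ij) by its expectation under that noise model, which shrinks the off-diagonal entries by a factor (1 - 2 mu)^2 while leaving the diagonal unchanged. The function name corrected_Q and the parameter mu are illustrative.

import numpy as np

def corrected_Q(K, y, mu):
    """Sketch of a label-noise correction for the SVM dual matrix.

    Assumes each training label flips independently with probability mu
    (an estimate supplied by the defender). Under that model,
    E[y_i y_j K_ij] = (1 - 2*mu)**2 * y_i * y_j * K_ij for i != j,
    while the diagonal (y_i**2 == 1) is unaffected.
    """
    Q = np.outer(y, y) * K            # label-annotated kernel matrix
    shrink = (1.0 - 2.0 * mu) ** 2    # expectation factor for off-diagonal terms
    Qc = shrink * Q
    np.fill_diagonal(Qc, np.diag(Q))  # restore the unshrunk diagonal
    return Qc

# Toy usage: linear kernel on random data with a guessed flip rate of 10%.
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 2))
y = np.array([1.0, 1.0, 1.0, -1.0, -1.0, -1.0])
K = X @ X.T                           # linear kernel
Qc = corrected_Q(K, y, mu=0.1)
print(Qc)

Relative to the uncorrected matrix, shrinking the off-diagonal entries makes the diagonal more dominant, which acts much like additional regularization and damps the influence of any single (possibly mislabeled) training point.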
@InProceedings{pmlr-v20-biggio11,
title = {Support Vector Machines Under Adversarial Label Noise},
author = {Battista Biggio and Blaine Nelson and Pavel Laskov},
booktitle = {Proceedings of the Asian Conference on Machine Learning},
pages = {97--112},
year = {2011},
editor = {Chun-Nan Hsu and Wee Sun Lee},
volume = {20},
series = {Proceedings of Machine Learning Research},
address = {South Garden Hotels and Resorts, Taoyuan, Taiwan},
month = {14--15 Nov},
publisher = {PMLR},
pdf = {http://proceedings.mlr.press/v20/biggio11/biggio11.pdf},
url = {http://proceedings.mlr.press/v20/biggio11.html},
abstract = {In adversarial classification tasks like spam filtering and intrusion detection, malicious adversaries may manipulate data to thwart the outcome of an automatic analysis. Thus, besides achieving good classification performance, machine learning algorithms have to be robust against adversarial data manipulation to successfully operate in these tasks. While support vector machines (SVMs) have been shown to be a very successful approach to classification problems, their effectiveness in adversarial classification tasks has not yet been extensively investigated. In this paper we present a preliminary investigation of the robustness of SVMs against adversarial data manipulation. In particular, we assume that the adversary controls some of the training data and aims to subvert the SVM learning process. Under this assumption, we show that such subversion is indeed possible, and we propose a strategy to improve the robustness of SVMs to training data manipulation based on a simple kernel matrix correction.}
}
%0 Conference Paper
%T Support Vector Machines Under Adversarial Label Noise
%A Battista Biggio
%A Blaine Nelson
%A Pavel Laskov
%B Proceedings of the Asian Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2011
%E Chun-Nan Hsu
%E Wee Sun Lee
%F pmlr-v20-biggio11
%I PMLR
%J Proceedings of Machine Learning Research
%P 97--112
%U http://proceedings.mlr.press/v20/biggio11.html
%V 20
%W PMLR
%X In adversarial classification tasks like spam filtering and intrusion detection, malicious adversaries may manipulate data to thwart the outcome of an automatic analysis. Thus, besides achieving good classification performance, machine learning algorithms have to be robust against adversarial data manipulation to successfully operate in these tasks. While support vector machines (SVMs) have been shown to be a very successful approach to classification problems, their effectiveness in adversarial classification tasks has not yet been extensively investigated. In this paper we present a preliminary investigation of the robustness of SVMs against adversarial data manipulation. In particular, we assume that the adversary controls some of the training data and aims to subvert the SVM learning process. Under this assumption, we show that such subversion is indeed possible, and we propose a strategy to improve the robustness of SVMs to training data manipulation based on a simple kernel matrix correction.
TY - CPAPER
TI - Support Vector Machines Under Adversarial Label Noise
AU - Battista Biggio
AU - Blaine Nelson
AU - Pavel Laskov
BT - Proceedings of the Asian Conference on Machine Learning
PY - 2011/11/17
DA - 2011/11/17
ED - Chun-Nan Hsu
ED - Wee Sun Lee
ID - pmlr-v20-biggio11
PB - PMLR
SP - 97
DP - PMLR
EP - 112
L1 - http://proceedings.mlr.press/v20/biggio11/biggio11.pdf
UR - http://proceedings.mlr.press/v20/biggio11.html
AB - In adversarial classification tasks like spam filtering and intrusion detection, malicious adversaries may manipulate data to thwart the outcome of an automatic analysis. Thus, besides achieving good classification performance, machine learning algorithms have to be robust against adversarial data manipulation to successfully operate in these tasks. While support vector machines (SVMs) have been shown to be a very successful approach to classification problems, their effectiveness in adversarial classification tasks has not yet been extensively investigated. In this paper we present a preliminary investigation of the robustness of SVMs against adversarial data manipulation. In particular, we assume that the adversary controls some of the training data and aims to subvert the SVM learning process. Under this assumption, we show that such subversion is indeed possible, and we propose a strategy to improve the robustness of SVMs to training data manipulation based on a simple kernel matrix correction.
ER -
Biggio, B., Nelson, B. & Laskov, P. (2011). Support Vector Machines Under Adversarial Label Noise. Proceedings of the Asian Conference on Machine Learning, in PMLR 20:97-112.