Novelty detection: Unlabeled data definitely help

Clayton Scott, Gilles Blanchard
; Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics, PMLR 5:464-471, 2009.

Abstract

In machine learning, one formulation of the novelty detection problem is to build a detector based on a training sample consisting of only nominal data. The standard (inductive) approach to this problem has been to declare novelties where the nominal density is low, which reduces the problem to density level set estimation. In this paper, we consider the setting where an unlabeled and possibly contaminated sample is also available at learning time. We argue that novelty detection is naturally solved by a general reduction to a binary classification problem. In particular, a detector with a desired false positive rate can be achieved through a reduction to Neyman-Pearson classification. Unlike the inductive approach, our approach yields detectors that are optimal (e.g., statistically consistent) regardless of the distribution on novelties. Therefore, in novelty detection, unlabeled data have a substantial impact on the theoretical properties of the decision rule.
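The reduction described above can be sketched in a few lines. This is a hypothetical 1-D illustration, not the authors' code: treat the nominal sample as class 0 and the unlabeled, possibly contaminated sample as class 1, score points toward the unlabeled class with a trivial linear rule, then pick the detection threshold on the nominal sample so the empirical false positive rate meets a target level alpha (the Neyman-Pearson step). The Gaussian data and mean-difference classifier are assumptions chosen for brevity.

```python
import random

random.seed(0)
nominal = [random.gauss(0.0, 1.0) for _ in range(2000)]                # class 0: nominal only
novelties = [random.gauss(4.0, 1.0) for _ in range(400)]
unlabeled = [random.gauss(0.0, 1.0) for _ in range(1600)] + novelties  # class 1: contaminated

# A trivial linear "classifier" standing in for any binary learner:
# the score increases in the direction of the unlabeled-sample mean.
mu0 = sum(nominal) / len(nominal)
mu1 = sum(unlabeled) / len(unlabeled)
direction = 1.0 if mu1 > mu0 else -1.0

def score(x):
    return direction * x

# Neyman-Pearson step: threshold at the empirical (1 - alpha) quantile
# of the nominal scores, so the false positive rate is at most alpha.
alpha = 0.05
nominal_scores = sorted(score(x) for x in nominal)
threshold = nominal_scores[int((1 - alpha) * len(nominal_scores))]

def detect(x):
    return score(x) > threshold

fpr = sum(detect(x) for x in nominal) / len(nominal)
tpr = sum(detect(x) for x in novelties) / len(novelties)
```

Because the threshold is an order statistic of the nominal scores, the empirical false positive rate is at most alpha by construction, while the detection rate on novelties depends on how well the classifier separates the two samples.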

Cite this Paper


BibTeX
@InProceedings{pmlr-v5-scott09a,
  title = {Novelty detection: Unlabeled data definitely help},
  author = {Clayton Scott and Gilles Blanchard},
  booktitle = {Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics},
  pages = {464--471},
  year = {2009},
  editor = {David van Dyk and Max Welling},
  volume = {5},
  series = {Proceedings of Machine Learning Research},
  address = {Hilton Clearwater Beach Resort, Clearwater Beach, Florida USA},
  month = {16--18 Apr},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v5/scott09a/scott09a.pdf},
  url = {http://proceedings.mlr.press/v5/scott09a.html},
  abstract = {In machine learning, one formulation of the novelty detection problem is to build a detector based on a training sample consisting of only nominal data. The standard (inductive) approach to this problem has been to declare novelties where the nominal density is low, which reduces the problem to density level set estimation. In this paper, we consider the setting where an unlabeled and possibly contaminated sample is also available at learning time. We argue that novelty detection is naturally solved by a general reduction to a binary classification problem. In particular, a detector with a desired false positive rate can be achieved through a reduction to Neyman-Pearson classification. Unlike the inductive approach, our approach yields detectors that are optimal (e.g., statistically consistent) regardless of the distribution on novelties. Therefore, in novelty detection, unlabeled data have a substantial impact on the theoretical properties of the decision rule.}
}
Endnote
%0 Conference Paper
%T Novelty detection: Unlabeled data definitely help
%A Clayton Scott
%A Gilles Blanchard
%B Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2009
%E David van Dyk
%E Max Welling
%F pmlr-v5-scott09a
%I PMLR
%J Proceedings of Machine Learning Research
%P 464--471
%U http://proceedings.mlr.press
%V 5
%W PMLR
%X In machine learning, one formulation of the novelty detection problem is to build a detector based on a training sample consisting of only nominal data. The standard (inductive) approach to this problem has been to declare novelties where the nominal density is low, which reduces the problem to density level set estimation. In this paper, we consider the setting where an unlabeled and possibly contaminated sample is also available at learning time. We argue that novelty detection is naturally solved by a general reduction to a binary classification problem. In particular, a detector with a desired false positive rate can be achieved through a reduction to Neyman-Pearson classification. Unlike the inductive approach, our approach yields detectors that are optimal (e.g., statistically consistent) regardless of the distribution on novelties. Therefore, in novelty detection, unlabeled data have a substantial impact on the theoretical properties of the decision rule.
RIS
TY  - CPAPER
TI  - Novelty detection: Unlabeled data definitely help
AU  - Clayton Scott
AU  - Gilles Blanchard
BT  - Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics
PY  - 2009/04/15
DA  - 2009/04/15
ED  - David van Dyk
ED  - Max Welling
ID  - pmlr-v5-scott09a
PB  - PMLR
SP  - 464
DP  - PMLR
EP  - 471
L1  - http://proceedings.mlr.press/v5/scott09a/scott09a.pdf
UR  - http://proceedings.mlr.press/v5/scott09a.html
AB  - In machine learning, one formulation of the novelty detection problem is to build a detector based on a training sample consisting of only nominal data. The standard (inductive) approach to this problem has been to declare novelties where the nominal density is low, which reduces the problem to density level set estimation. In this paper, we consider the setting where an unlabeled and possibly contaminated sample is also available at learning time. We argue that novelty detection is naturally solved by a general reduction to a binary classification problem. In particular, a detector with a desired false positive rate can be achieved through a reduction to Neyman-Pearson classification. Unlike the inductive approach, our approach yields detectors that are optimal (e.g., statistically consistent) regardless of the distribution on novelties. Therefore, in novelty detection, unlabeled data have a substantial impact on the theoretical properties of the decision rule.
ER  -
APA
Scott, C. & Blanchard, G. (2009). Novelty detection: Unlabeled data definitely help. Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics, in PMLR 5:464-471