Neyman-Pearson classification under a strict constraint

Philippe Rigollet, Xin Tong
Proceedings of the 24th Annual Conference on Learning Theory, PMLR 19:595-614, 2011.

Abstract

Motivated by problems of anomaly detection, this paper implements the Neyman-Pearson paradigm to deal with asymmetric errors in binary classification with a convex loss. Given a finite collection of classifiers, we combine them to obtain a new classifier that simultaneously satisfies, with high probability, the following two properties: (i) its probability of type I error is below a pre-specified level, and (ii) its probability of type II error is close to the minimum possible. The proposed classifier is obtained by minimizing an empirical objective subject to an empirical constraint. The novelty of the method is that the classifier output by this procedure is shown to satisfy the original constraint on type I error. This strict enforcement of the constraint has interesting consequences for the control of the type II error, and we develop new techniques to handle this situation. Finally, evident connections with chance constrained optimization are investigated.
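The selection principle described in the abstract, minimizing an empirical objective subject to an empirical constraint on type I error, can be illustrated with a toy sketch. This is not the paper's algorithm (which works with a convex loss and convex combinations of classifiers); here the finite collection is a family of threshold classifiers on synthetic one-dimensional data, and all names and data are illustrative:

```python
import random

random.seed(0)

# Toy data: class 0 ("normal") and class 1 ("anomaly") score samples.
neg = [random.gauss(0.0, 1.0) for _ in range(500)]  # class 0
pos = [random.gauss(2.0, 1.0) for _ in range(500)]  # class 1

alpha = 0.05  # pre-specified bound on the type I error

# Finite collection of threshold classifiers h_t(x) = 1{x > t}.
thresholds = [i / 10 for i in range(-30, 51)]

def type1(t):
    # Empirical type I error: fraction of class-0 samples flagged as 1.
    return sum(x > t for x in neg) / len(neg)

def type2(t):
    # Empirical type II error: fraction of class-1 samples flagged as 0.
    return sum(x <= t for x in pos) / len(pos)

# NP-style selection: among classifiers whose empirical type I error
# satisfies the constraint, pick the one with smallest empirical
# type II error.
feasible = [t for t in thresholds if type1(t) <= alpha]
t_star = min(feasible, key=type2)

print(f"threshold={t_star:.1f} "
      f"type1={type1(t_star):.3f} type2={type2(t_star):.3f}")
```

The point the paper makes is subtler than this sketch: enforcing the constraint on the *empirical* type I error does not by itself guarantee the *population* constraint holds, and showing that the output classifier still satisfies the original constraint with high probability is where the new techniques come in.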

Cite this Paper


BibTeX
@InProceedings{pmlr-v19-rigollet11a,
  title     = {Neyman-Pearson classification under a strict constraint},
  author    = {Rigollet, Philippe and Tong, Xin},
  booktitle = {Proceedings of the 24th Annual Conference on Learning Theory},
  pages     = {595--614},
  year      = {2011},
  editor    = {Kakade, Sham M. and von Luxburg, Ulrike},
  volume    = {19},
  series    = {Proceedings of Machine Learning Research},
  address   = {Budapest, Hungary},
  month     = {09--11 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v19/rigollet11a/rigollet11a.pdf},
  url       = {https://proceedings.mlr.press/v19/rigollet11a.html},
  abstract  = {Motivated by problems of anomaly detection, this paper implements the Neyman-Pearson paradigm to deal with asymmetric errors in binary classification with a convex loss. Given a finite collection of classifiers, we combine them and obtain a new classifier that satisfies simultaneously the two following properties with high probability: (i), its probability of type I error is below a pre-specified level and (ii), it has probability of type II error close to the minimum possible. The proposed classifier is obtained by minimizing an empirical objective subject to an empirical constraint. The novelty of the method is that the classifier output by this problem is shown to satisfy the original constraint on type I error. This strict enforcement of the constraint has interesting consequences on the control of the type II error and we develop new techniques to handle this situation. Finally, connections with chance constrained optimization are evident and are investigated.}
}
Endnote
%0 Conference Paper
%T Neyman-Pearson classification under a strict constraint
%A Philippe Rigollet
%A Xin Tong
%B Proceedings of the 24th Annual Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2011
%E Sham M. Kakade
%E Ulrike von Luxburg
%F pmlr-v19-rigollet11a
%I PMLR
%P 595--614
%U https://proceedings.mlr.press/v19/rigollet11a.html
%V 19
%X Motivated by problems of anomaly detection, this paper implements the Neyman-Pearson paradigm to deal with asymmetric errors in binary classification with a convex loss. Given a finite collection of classifiers, we combine them and obtain a new classifier that satisfies simultaneously the two following properties with high probability: (i), its probability of type I error is below a pre-specified level and (ii), it has probability of type II error close to the minimum possible. The proposed classifier is obtained by minimizing an empirical objective subject to an empirical constraint. The novelty of the method is that the classifier output by this problem is shown to satisfy the original constraint on type I error. This strict enforcement of the constraint has interesting consequences on the control of the type II error and we develop new techniques to handle this situation. Finally, connections with chance constrained optimization are evident and are investigated.
RIS
TY  - CPAPER
TI  - Neyman-Pearson classification under a strict constraint
AU  - Philippe Rigollet
AU  - Xin Tong
BT  - Proceedings of the 24th Annual Conference on Learning Theory
DA  - 2011/12/21
ED  - Sham M. Kakade
ED  - Ulrike von Luxburg
ID  - pmlr-v19-rigollet11a
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 19
SP  - 595
EP  - 614
L1  - http://proceedings.mlr.press/v19/rigollet11a/rigollet11a.pdf
UR  - https://proceedings.mlr.press/v19/rigollet11a.html
AB  - Motivated by problems of anomaly detection, this paper implements the Neyman-Pearson paradigm to deal with asymmetric errors in binary classification with a convex loss. Given a finite collection of classifiers, we combine them and obtain a new classifier that satisfies simultaneously the two following properties with high probability: (i), its probability of type I error is below a pre-specified level and (ii), it has probability of type II error close to the minimum possible. The proposed classifier is obtained by minimizing an empirical objective subject to an empirical constraint. The novelty of the method is that the classifier output by this problem is shown to satisfy the original constraint on type I error. This strict enforcement of the constraint has interesting consequences on the control of the type II error and we develop new techniques to handle this situation. Finally, connections with chance constrained optimization are evident and are investigated.
ER  -
APA
Rigollet, P. & Tong, X. (2011). Neyman-Pearson classification under a strict constraint. Proceedings of the 24th Annual Conference on Learning Theory, in Proceedings of Machine Learning Research 19:595-614. Available from https://proceedings.mlr.press/v19/rigollet11a.html.