Computational Complexity of Linear Large Margin Classification With Ramp Loss

Søren Frejstrup Maibing, Christian Igel
Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics, PMLR 38:259-267, 2015.

Abstract

Minimizing the binary classification error with a linear model leads to an NP-hard problem. In practice, surrogate loss functions are used instead, in particular loss functions leading to large margin classification such as the hinge loss and the ramp loss. The intuitive large margin concept is theoretically supported by generalization bounds linking the expected classification error to the empirical margin error and the complexity of the considered hypothesis class. This article addresses the fundamental question of the computational complexity of determining whether there is a hypothesis class containing a hypothesis such that the upper bound on the generalization error is below a certain value. Results of this type are important for model comparison and selection. This paper takes a first step and proves that minimizing a basic margin bound is NP-hard when considering linear hypotheses and the rho-margin loss function, which generalizes the ramp loss. This result directly implies the hardness of ramp loss minimization.
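To make the loss functions discussed above concrete, the following is a minimal sketch (not from the paper itself) of the hinge loss, the rho-margin loss, and the ramp loss as functions of the margin y·f(x). The key point is that the rho-margin and ramp losses are bounded above by 1, unlike the hinge loss; the function names and the NumPy-based formulation are illustrative choices, not part of the original work.

```python
import numpy as np

def hinge_loss(margins):
    # Hinge loss: max(0, 1 - y f(x)); grows without bound
    # as the margin becomes more negative.
    return np.maximum(0.0, 1.0 - margins)

def rho_margin_loss(margins, rho=1.0):
    # Rho-margin loss: a hinge scaled by 1/rho and clipped at 1,
    # so each misclassified point contributes at most 1 to the
    # empirical risk.
    return np.minimum(1.0, np.maximum(0.0, 1.0 - margins / rho))

def ramp_loss(margins):
    # The ramp loss is the rho-margin loss with rho = 1.
    return rho_margin_loss(margins, rho=1.0)
```

For example, at margins [2.0, 0.5, -3.0] the ramp loss yields [0, 0.5, 1], while the hinge loss yields [0, 0.5, 4]: the clipping at 1 is what makes the ramp loss robust to outliers, and it is also what places its minimization outside the reach of convex optimization.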

Cite this Paper


BibTeX
@InProceedings{pmlr-v38-frejstrupmaibing15,
  title     = {{Computational Complexity of Linear Large Margin Classification With Ramp Loss}},
  author    = {Frejstrup Maibing, Søren and Igel, Christian},
  booktitle = {Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics},
  pages     = {259--267},
  year      = {2015},
  editor    = {Lebanon, Guy and Vishwanathan, S. V. N.},
  volume    = {38},
  series    = {Proceedings of Machine Learning Research},
  address   = {San Diego, California, USA},
  month     = {09--12 May},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v38/frejstrupmaibing15.pdf},
  url       = {http://proceedings.mlr.press/v38/frejstrupmaibing15.html},
  abstract  = {Minimizing the binary classification error with a linear model leads to an NP-hard problem. In practice, surrogate loss functions are used, in particular loss functions leading to large margin classification such as the hinge loss and the ramp loss. The intuitive large margin concept is theoretically supported by generalization bounds linking the expected classification error to the empirical margin error and the complexity of the considered hypotheses class. This article addresses the fundamental question about the computational complexity of determining whether there is a hypotheses class with a hypothesis such that the upper bound on the generalization error is below a certain value. Results of this type are important for model comparison and selection. This paper takes a first step and proves that minimizing a basic margin-bound is NP-hard when considering linear hypotheses and the rho-margin loss function, which generalizes the ramp loss. This result directly implies the hardness of ramp loss minimization.}
}
Endnote
%0 Conference Paper
%T Computational Complexity of Linear Large Margin Classification With Ramp Loss
%A Søren Frejstrup Maibing
%A Christian Igel
%B Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2015
%E Guy Lebanon
%E S. V. N. Vishwanathan
%F pmlr-v38-frejstrupmaibing15
%I PMLR
%P 259--267
%U http://proceedings.mlr.press/v38/frejstrupmaibing15.html
%V 38
%X Minimizing the binary classification error with a linear model leads to an NP-hard problem. In practice, surrogate loss functions are used, in particular loss functions leading to large margin classification such as the hinge loss and the ramp loss. The intuitive large margin concept is theoretically supported by generalization bounds linking the expected classification error to the empirical margin error and the complexity of the considered hypotheses class. This article addresses the fundamental question about the computational complexity of determining whether there is a hypotheses class with a hypothesis such that the upper bound on the generalization error is below a certain value. Results of this type are important for model comparison and selection. This paper takes a first step and proves that minimizing a basic margin-bound is NP-hard when considering linear hypotheses and the rho-margin loss function, which generalizes the ramp loss. This result directly implies the hardness of ramp loss minimization.
RIS
TY - CPAPER
TI - Computational Complexity of Linear Large Margin Classification With Ramp Loss
AU - Søren Frejstrup Maibing
AU - Christian Igel
BT - Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics
DA - 2015/02/21
ED - Guy Lebanon
ED - S. V. N. Vishwanathan
ID - pmlr-v38-frejstrupmaibing15
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 38
SP - 259
EP - 267
L1 - http://proceedings.mlr.press/v38/frejstrupmaibing15.pdf
UR - http://proceedings.mlr.press/v38/frejstrupmaibing15.html
AB - Minimizing the binary classification error with a linear model leads to an NP-hard problem. In practice, surrogate loss functions are used, in particular loss functions leading to large margin classification such as the hinge loss and the ramp loss. The intuitive large margin concept is theoretically supported by generalization bounds linking the expected classification error to the empirical margin error and the complexity of the considered hypotheses class. This article addresses the fundamental question about the computational complexity of determining whether there is a hypotheses class with a hypothesis such that the upper bound on the generalization error is below a certain value. Results of this type are important for model comparison and selection. This paper takes a first step and proves that minimizing a basic margin-bound is NP-hard when considering linear hypotheses and the rho-margin loss function, which generalizes the ramp loss. This result directly implies the hardness of ramp loss minimization.
ER -
APA
Frejstrup Maibing, S. & Igel, C. (2015). Computational Complexity of Linear Large Margin Classification With Ramp Loss. Proceedings of the Eighteenth International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 38:259-267. Available from http://proceedings.mlr.press/v38/frejstrupmaibing15.html.