Efficient variable selection in support vector machines via the alternating direction method of multipliers

Gui-Bo Ye, Yifei Chen, Xiaohui Xie
Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, PMLR 15:832-840, 2011.

Abstract

The support vector machine (SVM) is a widely used tool for classification. Although commonly understood as a method of finding the maximum-margin hyperplane, it can also be formulated as a regularized function estimation problem, corresponding to a hinge loss function plus an $\ell_2$-norm regularization term. The doubly regularized support vector machine (DrSVM) is a variant of the standard SVM, which introduces an additional $\ell_1$-norm regularization term on the fitted coefficients. The combined $\ell_1$ and $\ell_2$ regularization, termed the elastic net penalty, has the interesting property of achieving simultaneous variable selection and margin maximization within a single framework. However, because of the nonsmoothness of both the loss function and the regularization term, there is no efficient method to solve DrSVM for large-scale problems. Here we develop an efficient algorithm based on the alternating direction method of multipliers (ADMM) to solve the optimization problem in DrSVM. The utility of the method is further illustrated using both simulated and real-world datasets.
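As a reader's aid, the optimization problem the abstract describes takes the following standard elastic-net SVM form (a sketch of the usual DrSVM formulation; the paper's exact notation may differ):

```latex
\min_{\beta_0,\,\beta}\;
\sum_{i=1}^{n} \bigl[\,1 - y_i\,(\beta_0 + x_i^{\top}\beta)\,\bigr]_{+}
\;+\; \frac{\lambda_2}{2}\,\|\beta\|_2^{2}
\;+\; \lambda_1\,\|\beta\|_1
```

where $[u]_+ = \max(u, 0)$ is the hinge loss; the $\ell_1$ term induces sparsity (variable selection) while the $\ell_2$ term preserves the margin-maximization behavior of the standard SVM.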
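The ADMM strategy is easiest to see on a one-dimensional toy problem: minimize $\frac{1}{2}(x-b)^2 + \lambda|x|$ by splitting $x = z$, so the smooth and the $\ell_1$ pieces are handled in separate subproblems. This is not the paper's DrSVM solver, only a minimal illustration of the variable splitting, proximal updates, and dual ascent that ADMM alternates between; all names below are illustrative.

```python
def soft_threshold(t, k):
    """Proximal operator of k*|.|: shrink t toward zero by k."""
    if t > k:
        return t - k
    if t < -k:
        return t + k
    return 0.0

def admm_1d(b, lam, rho=1.0, iters=200):
    """ADMM for min_x 0.5*(x - b)**2 + lam*|x|, split as x = z.

    x-update: smooth quadratic part, closed form.
    z-update: soft-thresholding (prox of the l1 term).
    u-update: scaled dual ascent on the constraint x - z = 0.
    """
    x = z = u = 0.0
    for _ in range(iters):
        x = (b + rho * (z - u)) / (1.0 + rho)  # minimize smooth part + quadratic penalty
        z = soft_threshold(x + u, lam / rho)   # minimize l1 part + quadratic penalty
        u += x - z                             # dual variable accumulates the residual
    return x

# The exact minimizer of this problem is soft_threshold(b, lam),
# so admm_1d(3.0, 1.0) converges to 2.0.
print(admm_1d(3.0, 1.0))
```

The same pattern scales to DrSVM: the hinge loss and the $\ell_1$ penalty each get their own (cheap, closed-form) proximal subproblem, which is what makes ADMM attractive despite both terms being nonsmooth.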

Cite this Paper


BibTeX
@InProceedings{pmlr-v15-ye11a,
  title     = {Efficient variable selection in support vector machines via the alternating direction method of multipliers},
  author    = {Ye, Gui-Bo and Chen, Yifei and Xie, Xiaohui},
  booktitle = {Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics},
  pages     = {832--840},
  year      = {2011},
  editor    = {Gordon, Geoffrey and Dunson, David and Dudík, Miroslav},
  volume    = {15},
  series    = {Proceedings of Machine Learning Research},
  address   = {Fort Lauderdale, FL, USA},
  month     = {11--13 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v15/ye11a/ye11a.pdf},
  url       = {https://proceedings.mlr.press/v15/ye11a.html},
  abstract  = {The support vector machine (SVM) is a widely used tool for classification. Although commonly understood as a method of finding the maximum-margin hyperplane, it can also be formulated as a regularized function estimation problem, corresponding to a hinge loss function plus an $\ell_2$-norm regularization term. The doubly regularized support vector machine (DrSVM) is a variant of the standard SVM, which introduces an additional $\ell_1$-norm regularization term on the fitted coefficients. The combined $\ell_1$ and $\ell_2$ regularization, termed the elastic net penalty, has the interesting property of achieving simultaneous variable selection and margin maximization within a single framework. However, because of the nonsmoothness of both the loss function and the regularization term, there is no efficient method to solve DrSVM for large-scale problems. Here we develop an efficient algorithm based on the alternating direction method of multipliers (ADMM) to solve the optimization problem in DrSVM. The utility of the method is further illustrated using both simulated and real-world datasets.}
}
Endnote
%0 Conference Paper
%T Efficient variable selection in support vector machines via the alternating direction method of multipliers
%A Gui-Bo Ye
%A Yifei Chen
%A Xiaohui Xie
%B Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2011
%E Geoffrey Gordon
%E David Dunson
%E Miroslav Dudík
%F pmlr-v15-ye11a
%I PMLR
%P 832--840
%U https://proceedings.mlr.press/v15/ye11a.html
%V 15
%X The support vector machine (SVM) is a widely used tool for classification. Although commonly understood as a method of finding the maximum-margin hyperplane, it can also be formulated as a regularized function estimation problem, corresponding to a hinge loss function plus an $\ell_2$-norm regularization term. The doubly regularized support vector machine (DrSVM) is a variant of the standard SVM, which introduces an additional $\ell_1$-norm regularization term on the fitted coefficients. The combined $\ell_1$ and $\ell_2$ regularization, termed the elastic net penalty, has the interesting property of achieving simultaneous variable selection and margin maximization within a single framework. However, because of the nonsmoothness of both the loss function and the regularization term, there is no efficient method to solve DrSVM for large-scale problems. Here we develop an efficient algorithm based on the alternating direction method of multipliers (ADMM) to solve the optimization problem in DrSVM. The utility of the method is further illustrated using both simulated and real-world datasets.
RIS
TY - CPAPER
TI - Efficient variable selection in support vector machines via the alternating direction method of multipliers
AU - Gui-Bo Ye
AU - Yifei Chen
AU - Xiaohui Xie
BT - Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics
DA - 2011/06/14
ED - Geoffrey Gordon
ED - David Dunson
ED - Miroslav Dudík
ID - pmlr-v15-ye11a
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 15
SP - 832
EP - 840
L1 - http://proceedings.mlr.press/v15/ye11a/ye11a.pdf
UR - https://proceedings.mlr.press/v15/ye11a.html
AB - The support vector machine (SVM) is a widely used tool for classification. Although commonly understood as a method of finding the maximum-margin hyperplane, it can also be formulated as a regularized function estimation problem, corresponding to a hinge loss function plus an $\ell_2$-norm regularization term. The doubly regularized support vector machine (DrSVM) is a variant of the standard SVM, which introduces an additional $\ell_1$-norm regularization term on the fitted coefficients. The combined $\ell_1$ and $\ell_2$ regularization, termed the elastic net penalty, has the interesting property of achieving simultaneous variable selection and margin maximization within a single framework. However, because of the nonsmoothness of both the loss function and the regularization term, there is no efficient method to solve DrSVM for large-scale problems. Here we develop an efficient algorithm based on the alternating direction method of multipliers (ADMM) to solve the optimization problem in DrSVM. The utility of the method is further illustrated using both simulated and real-world datasets.
ER -
APA
Ye, G., Chen, Y. & Xie, X. (2011). Efficient variable selection in support vector machines via the alternating direction method of multipliers. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 15:832-840. Available from https://proceedings.mlr.press/v15/ye11a.html.