Convex formulations of radius-margin based Support Vector Machines
Proceedings of the 30th International Conference on Machine Learning, PMLR 28(1):169-177, 2013.
Abstract
We consider Support Vector Machines (SVMs) learned together with linear transformations of the feature spaces on which they are applied. Under this scenario the radius of the smallest data-enclosing sphere is no longer fixed, so optimizing the SVM error bound with respect to both the radius and the margin has the potential to deliver a tighter bound. In this paper we present two novel algorithms: R-SVM_μ^+, an SVM radius-margin based feature selection algorithm, and R-SVM^+, a metric learning-based SVM. We derive our algorithms by exploiting a new, tighter approximation of the radius and a metric learning interpretation of the SVM. Both directly optimize the radius-margin error bound over linear transformations. Unlike almost all existing radius-margin based SVM algorithms, which are either non-convex or combinatorial, our algorithms are standard convex quadratic optimization problems with linear or quadratic constraints. We perform a number of experiments on benchmark datasets. R-SVM_μ^+ exhibits excellent feature selection performance compared to state-of-the-art feature selection methods such as L_1-norm and elastic-net based methods. R-SVM^+ achieves significantly better classification performance than the SVM and its other state-of-the-art variants. The results make clear that incorporating the radius into the cost function, as a means of controlling the data spread, has strong beneficial effects.
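The quantity the abstract refers to is the classical radius-margin error bound, which scales with R^2/γ^2 = R^2·||w||^2, where R is the radius of the smallest data-enclosing sphere and γ = 1/||w|| is the margin. The sketch below is not the paper's R-SVM_μ^+ or R-SVM^+ formulation; it is a minimal illustration of the two ingredients of that bound on a fixed feature space, using a standard linear SVM for the margin term and the maximum distance to the data centroid as a simple upper bound on the enclosing-sphere radius.

```python
# Minimal sketch (not the paper's algorithm): computes the two factors of the
# radius-margin bound, R^2 and ||w||^2, for a fixed (untransformed) feature space.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=200, n_features=20, random_state=0)

# Margin term: a linear SVM gives margin gamma = 1 / ||w||.
svm = LinearSVC(C=10.0, max_iter=10000).fit(X, y)
w_norm_sq = float(np.sum(svm.coef_ ** 2))

# Radius term: max distance to the centroid is a simple upper bound on the
# radius of the smallest data-enclosing sphere (the exact radius requires
# solving a small QP over the data points).
center = X.mean(axis=0)
R_sq = float(np.max(np.sum((X - center) ** 2, axis=1)))

# Radius-margin error bound is proportional to R^2 / gamma^2 = R^2 * ||w||^2.
print("||w||^2       =", w_norm_sq)
print("R^2 (approx.) =", R_sq)
print("R^2 * ||w||^2 =", R_sq * w_norm_sq)
```

In the paper's setting the feature space is itself transformed (e.g., rescaled by a learned μ), so R is no longer a constant and both factors must be optimized jointly; the contribution of the paper is to do this while keeping the problem convex.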