Random Projections for Support Vector Machines

Saurabh Paul, Christos Boutsidis, Malik Magdon-Ismail, Petros Drineas
Proceedings of the Sixteenth International Conference on Artificial Intelligence and Statistics, PMLR 31:498-506, 2013.

Abstract

Let X be a data matrix of rank ρ, representing n points in d-dimensional space. The linear support vector machine constructs a hyperplane separator that maximizes the 1-norm soft margin. We develop a new oblivious dimension reduction technique that is precomputed and can be applied to any input matrix X. We prove that, with high probability, the margin and minimum enclosing ball in the feature space are preserved to within ε-relative error, ensuring generalization comparable to that in the original space. We present extensive experiments with real and synthetic data to support our theory.
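
As a rough illustration of the idea (not the authors' implementation), the sketch below applies an oblivious random projection to the data before training a linear SVM. The projection matrix R is drawn independently of X, which is what makes the reduction oblivious: it can be precomputed and applied to any input matrix. The Gaussian choice of R, the target dimension r, the synthetic data, and scikit-learn's LinearSVC are illustrative assumptions, not details taken from the paper.

# Illustrative sketch only: oblivious random projection followed by a linear SVM.
import numpy as np
from sklearn.svm import LinearSVC  # hinge loss = 1-norm soft-margin linear SVM

rng = np.random.default_rng(0)

# Synthetic data: n points in d dimensions, labeled by a random hyperplane.
n, d, r = 500, 2000, 100           # r = target dimension (assumed, not from the paper)
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = np.sign(X @ w_true)            # labels in {-1, +1}

# Oblivious projection: R has i.i.d. N(0, 1/r) entries and does not depend on X,
# so it can be drawn (precomputed) before the data are seen.
R = rng.normal(scale=1.0 / np.sqrt(r), size=(d, r))
X_proj = X @ R                     # the same n points, now in r dimensions

# Train the 1-norm soft-margin SVM in the projected space.
clf = LinearSVC(C=1.0, loss="hinge").fit(X_proj, y)
print("training accuracy in projected space:", clf.score(X_proj, y))

Per the abstract's guarantee, with high probability the margin in the projected space is within ε-relative error of the margin in the original d-dimensional space, so the classifier trained on X_proj generalizes comparably while the SVM is solved in far fewer dimensions.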

Cite this Paper


BibTeX
@InProceedings{pmlr-v31-paul13a,
  title     = {Random Projections for Support Vector Machines},
  author    = {Paul, Saurabh and Boutsidis, Christos and Magdon-Ismail, Malik and Drineas, Petros},
  booktitle = {Proceedings of the Sixteenth International Conference on Artificial Intelligence and Statistics},
  pages     = {498--506},
  year      = {2013},
  editor    = {Carvalho, Carlos M. and Ravikumar, Pradeep},
  volume    = {31},
  series    = {Proceedings of Machine Learning Research},
  address   = {Scottsdale, Arizona, USA},
  month     = {29 Apr--01 May},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v31/paul13a.pdf},
  url       = {https://proceedings.mlr.press/v31/paul13a.html},
  abstract  = {Let X be a data matrix of rank ρ, representing n points in d-dimensional space. The linear support vector machine constructs a hyperplane separator that maximizes the 1-norm soft margin. We develop a new oblivious dimension reduction technique which is precomputed and can be applied to any input matrix X. We prove that, with high probability, the margin and minimum enclosing ball in the feature space are preserved to within ε-relative error, ensuring comparable generalization as in the original space. We present extensive experiments with real and synthetic data to support our theory.}
}
Endnote
%0 Conference Paper
%T Random Projections for Support Vector Machines
%A Saurabh Paul
%A Christos Boutsidis
%A Malik Magdon-Ismail
%A Petros Drineas
%B Proceedings of the Sixteenth International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2013
%E Carlos M. Carvalho
%E Pradeep Ravikumar
%F pmlr-v31-paul13a
%I PMLR
%P 498--506
%U https://proceedings.mlr.press/v31/paul13a.html
%V 31
%X Let X be a data matrix of rank ρ, representing n points in d-dimensional space. The linear support vector machine constructs a hyperplane separator that maximizes the 1-norm soft margin. We develop a new oblivious dimension reduction technique which is precomputed and can be applied to any input matrix X. We prove that, with high probability, the margin and minimum enclosing ball in the feature space are preserved to within ε-relative error, ensuring comparable generalization as in the original space. We present extensive experiments with real and synthetic data to support our theory.
RIS
TY - CPAPER
TI - Random Projections for Support Vector Machines
AU - Saurabh Paul
AU - Christos Boutsidis
AU - Malik Magdon-Ismail
AU - Petros Drineas
BT - Proceedings of the Sixteenth International Conference on Artificial Intelligence and Statistics
DA - 2013/04/29
ED - Carlos M. Carvalho
ED - Pradeep Ravikumar
ID - pmlr-v31-paul13a
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 31
SP - 498
EP - 506
L1 - http://proceedings.mlr.press/v31/paul13a.pdf
UR - https://proceedings.mlr.press/v31/paul13a.html
AB - Let X be a data matrix of rank ρ, representing n points in d-dimensional space. The linear support vector machine constructs a hyperplane separator that maximizes the 1-norm soft margin. We develop a new oblivious dimension reduction technique which is precomputed and can be applied to any input matrix X. We prove that, with high probability, the margin and minimum enclosing ball in the feature space are preserved to within ε-relative error, ensuring comparable generalization as in the original space. We present extensive experiments with real and synthetic data to support our theory.
ER -
APA
Paul, S., Boutsidis, C., Magdon-Ismail, M. & Drineas, P. (2013). Random Projections for Support Vector Machines. Proceedings of the Sixteenth International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 31:498-506. Available from https://proceedings.mlr.press/v31/paul13a.html.
