Scaling up Kernel SVM on Limited Resources: A Low-rank Linearization Approach

Kai Zhang, Liang Lan, Zhuang Wang, Fabian Moerchen
Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics, PMLR 22:1425-1434, 2012.

Abstract

Kernel Support Vector Machine delivers state-of-the-art results in non-linear classification, but the need to maintain a large number of support vectors poses a challenge in large-scale training and testing. In contrast, linear SVM is much more scalable even on limited computing resources (e.g., ordinary desktop PCs), but the learned model cannot capture non-linear concepts. To scale up kernel SVM on limited resources, we propose a low-rank linearization approach that transforms a non-linear SVM into a linear one via a novel, approximate empirical kernel map computed from an efficient low-rank approximation of kernel matrices. We call it LLSVM (Low-rank Linearized SVM). We theoretically study the gap between the solutions under the optimal and approximate kernel maps, which in turn provides important guidance for sampling-based kernel approximations. Our algorithm inherits the high efficiency of linear SVMs and the rich representability of kernel classifiers. Evaluation against large-scale linear and kernel SVMs on several truly large data sets shows that the proposed method achieves a better tradeoff between scalability and model representability.
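The core idea described in the abstract can be illustrated with a short sketch: build a low-rank approximation of the kernel matrix from a small set of landmark points (a Nystrom-style factorization K ≈ W M^{-1} W^T), map every example to the virtual feature psi(x) = M^{-1/2} k(x, landmarks) so that inner products of virtual features reproduce the approximate kernel, and then train an ordinary linear SVM on those features. The snippet below is a minimal sketch under these assumptions, not the authors' implementation; the RBF kernel, the value of gamma, the number of landmarks m, uniform landmark sampling, and scikit-learn's LinearSVC are all choices made for illustration.

    import numpy as np
    from sklearn.metrics.pairwise import rbf_kernel
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(2000, 10))
    y = (np.sin(X[:, 0]) + X[:, 1] ** 2 > 1.0).astype(int)  # a non-linear concept

    # Pick m landmark points; m controls the rank of the approximation
    # (and hence the trade-off between accuracy and cost).
    m, gamma = 100, 0.5
    landmarks = X[rng.choice(len(X), size=m, replace=False)]

    # Nystrom-style factorization: K ~ W M^{-1} W^T, with M = k(Z, Z).
    M = rbf_kernel(landmarks, landmarks, gamma=gamma)
    vals, vecs = np.linalg.eigh(M)
    vals = np.maximum(vals, 1e-12)  # guard against tiny/negative eigenvalues
    M_inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T

    def virtual_features(X_new):
        # Approximate empirical kernel map: psi(x) = M^{-1/2} k(x, landmarks),
        # so psi(x) . psi(x') ~ k(x, x') under the low-rank approximation.
        return rbf_kernel(X_new, landmarks, gamma=gamma) @ M_inv_sqrt

    # Any fast linear SVM solver can now be run on the virtual features.
    clf = LinearSVC(C=1.0).fit(virtual_features(X), y)
    print("train accuracy:", clf.score(virtual_features(X), y))

Note the practical payoff the abstract alludes to: at test time only the m landmarks and a single linear weight vector need to be kept, rather than a potentially large set of support vectors.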

Cite this Paper


BibTeX
@InProceedings{pmlr-v22-zhang12d,
  title     = {Scaling up Kernel SVM on Limited Resources: A Low-rank Linearization Approach},
  author    = {Zhang, Kai and Lan, Liang and Wang, Zhuang and Moerchen, Fabian},
  booktitle = {Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics},
  pages     = {1425--1434},
  year      = {2012},
  editor    = {Lawrence, Neil D. and Girolami, Mark},
  volume    = {22},
  series    = {Proceedings of Machine Learning Research},
  address   = {La Palma, Canary Islands},
  month     = {21--23 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v22/zhang12d/zhang12d.pdf},
  url       = {https://proceedings.mlr.press/v22/zhang12d.html},
  abstract  = {Kernel Support Vector Machine delivers state-of-the-art results in non-linear classification, but the need to maintain a large number of support vectors poses a challenge in large-scale training and testing. In contrast, linear SVM is much more scalable even on limited computing resources (e.g., ordinary desktop PCs), but the learned model cannot capture non-linear concepts. To scale up kernel SVM on limited resources, we propose a low-rank linearization approach that transforms a non-linear SVM into a linear one via a novel, approximate empirical kernel map computed from an efficient low-rank approximation of kernel matrices. We call it LLSVM (Low-rank Linearized SVM). We theoretically study the gap between the solutions under the optimal and approximate kernel maps, which in turn provides important guidance for sampling-based kernel approximations. Our algorithm inherits the high efficiency of linear SVMs and the rich representability of kernel classifiers. Evaluation against large-scale linear and kernel SVMs on several truly large data sets shows that the proposed method achieves a better tradeoff between scalability and model representability.}
}
Endnote
%0 Conference Paper
%T Scaling up Kernel SVM on Limited Resources: A Low-rank Linearization Approach
%A Kai Zhang
%A Liang Lan
%A Zhuang Wang
%A Fabian Moerchen
%B Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2012
%E Neil D. Lawrence
%E Mark Girolami
%F pmlr-v22-zhang12d
%I PMLR
%P 1425--1434
%U https://proceedings.mlr.press/v22/zhang12d.html
%V 22
%X Kernel Support Vector Machine delivers state-of-the-art results in non-linear classification, but the need to maintain a large number of support vectors poses a challenge in large-scale training and testing. In contrast, linear SVM is much more scalable even on limited computing resources (e.g., ordinary desktop PCs), but the learned model cannot capture non-linear concepts. To scale up kernel SVM on limited resources, we propose a low-rank linearization approach that transforms a non-linear SVM into a linear one via a novel, approximate empirical kernel map computed from an efficient low-rank approximation of kernel matrices. We call it LLSVM (Low-rank Linearized SVM). We theoretically study the gap between the solutions under the optimal and approximate kernel maps, which in turn provides important guidance for sampling-based kernel approximations. Our algorithm inherits the high efficiency of linear SVMs and the rich representability of kernel classifiers. Evaluation against large-scale linear and kernel SVMs on several truly large data sets shows that the proposed method achieves a better tradeoff between scalability and model representability.
RIS
TY - CPAPER
TI - Scaling up Kernel SVM on Limited Resources: A Low-rank Linearization Approach
AU - Kai Zhang
AU - Liang Lan
AU - Zhuang Wang
AU - Fabian Moerchen
BT - Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics
DA - 2012/04/21
ED - Neil D. Lawrence
ED - Mark Girolami
ID - pmlr-v22-zhang12d
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 22
SP - 1425
EP - 1434
L1 - http://proceedings.mlr.press/v22/zhang12d/zhang12d.pdf
UR - https://proceedings.mlr.press/v22/zhang12d.html
AB - Kernel Support Vector Machine delivers state-of-the-art results in non-linear classification, but the need to maintain a large number of support vectors poses a challenge in large-scale training and testing. In contrast, linear SVM is much more scalable even on limited computing resources (e.g., ordinary desktop PCs), but the learned model cannot capture non-linear concepts. To scale up kernel SVM on limited resources, we propose a low-rank linearization approach that transforms a non-linear SVM into a linear one via a novel, approximate empirical kernel map computed from an efficient low-rank approximation of kernel matrices. We call it LLSVM (Low-rank Linearized SVM). We theoretically study the gap between the solutions under the optimal and approximate kernel maps, which in turn provides important guidance for sampling-based kernel approximations. Our algorithm inherits the high efficiency of linear SVMs and the rich representability of kernel classifiers. Evaluation against large-scale linear and kernel SVMs on several truly large data sets shows that the proposed method achieves a better tradeoff between scalability and model representability.
ER -
APA
Zhang, K., Lan, L., Wang, Z. & Moerchen, F. (2012). Scaling up Kernel SVM on Limited Resources: A Low-rank Linearization Approach. Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 22:1425-1434. Available from https://proceedings.mlr.press/v22/zhang12d.html.
