Practical Large Scale Classification with Additive Kernels

Hao Yang, Jianxin Wu
Proceedings of the Asian Conference on Machine Learning, PMLR 25:523-538, 2012.

Abstract

For classification problems with millions of training examples or dimensions, accuracy, training and testing speed, and memory usage are the main concerns. Recent advances have allowed linear SVMs to tackle such problems with moderate time and space costs, but for many tasks in computer vision, additive kernels often achieve higher accuracies. In this paper, we propose the PmSVM-LUT algorithm, which employs look-up tables to speed up training and testing and to reduce the memory usage of additive kernel SVM classification, in order to meet the needs of large scale problems. PmSVM-LUT is based on PmSVM (Wu, 2012), which employs a polynomial approximation of the gradient function to speed up the dual coordinate descent method. We also analyze the polynomial approximation numerically to demonstrate its validity. Empirically, our algorithm is faster than PmSVM and feature mapping on many datasets, achieves higher classification accuracies, and saves up to 60% of the memory usage.
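
The abstract describes the look-up tables only at a high level; the sketch below illustrates one common way such a table can speed up testing with an additive kernel SVM, and it is not the paper's implementation. For an additive kernel K(x, z) = sum_d k(x_d, z_d), the decision value f(x) = sum_i alpha_i y_i K(x_i, x) + b decomposes into per-dimension functions g_d(x_d) that can be tabulated on a fixed grid after training, so prediction needs only one table lookup per dimension. The chi-square kernel, the function names (chi2, build_lut, predict_lut), the bin count, and the [0, 1] feature range are all illustrative assumptions.

```python
import numpy as np

# Minimal sketch (not the paper's code): tabulate the per-dimension decision
# functions g_d(v) = sum_i coef_i * k(sv[i, d], v) of an additive kernel SVM,
# so that prediction becomes one table lookup per dimension instead of a
# kernel evaluation against every support vector.

def chi2(a, b):
    """Additive chi-square kernel on scalars: 2ab / (a + b), with 0 when a + b = 0."""
    s = a + b
    return np.where(s > 0, 2.0 * a * b / np.where(s > 0, s, 1.0), 0.0)

def build_lut(sv, coef, n_bins=100, lo=0.0, hi=1.0):
    """Tabulate g_d on a uniform grid; sv is (n_sv, n_dim), coef[i] = alpha_i * y_i."""
    grid = np.linspace(lo, hi, n_bins)                        # shared bin centers
    # lut[d, j] = sum_i coef[i] * k(sv[i, d], grid[j])
    lut = np.einsum('i,idj->dj', coef, chi2(sv[:, :, None], grid[None, None, :]))
    return grid, lut

def predict_lut(x, grid, lut, bias=0.0):
    """Approximate f(x) with one table lookup per dimension."""
    idx = np.clip(np.searchsorted(grid, x), 0, len(grid) - 1)
    return lut[np.arange(len(x)), idx].sum() + bias

# Tiny usage example with random "support vectors" and coefficients.
rng = np.random.default_rng(0)
sv, coef = rng.random((50, 8)), rng.standard_normal(50)
grid, lut = build_lut(sv, coef)
x = rng.random(8)
exact = coef @ chi2(sv, x[None, :]).sum(axis=1)
print(predict_lut(x, grid, lut), exact)   # the LUT value closely tracks the exact value
```

The table costs O(n_dim x n_bins) memory but makes the test cost independent of the number of support vectors, which is the kind of speed/memory trade-off the abstract refers to.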

Cite this Paper


BibTeX
@InProceedings{pmlr-v25-yang12, title = {Practical Large Scale Classification with Additive Kernels}, author = {Yang, Hao and Wu, Jianxin}, booktitle = {Proceedings of the Asian Conference on Machine Learning}, pages = {523--538}, year = {2012}, editor = {Hoi, Steven C. H. and Buntine, Wray}, volume = {25}, series = {Proceedings of Machine Learning Research}, address = {Singapore Management University, Singapore}, month = {04--06 Nov}, publisher = {PMLR}, pdf = {http://proceedings.mlr.press/v25/yang12/yang12.pdf}, url = {https://proceedings.mlr.press/v25/yang12.html} }
Endnote
%0 Conference Paper %T Practical Large Scale Classification with Additive Kernels %A Hao Yang %A Jianxin Wu %B Proceedings of the Asian Conference on Machine Learning %C Proceedings of Machine Learning Research %D 2012 %E Steven C. H. Hoi %E Wray Buntine %F pmlr-v25-yang12 %I PMLR %P 523--538 %U https://proceedings.mlr.press/v25/yang12.html %V 25
RIS
TY - CPAPER TI - Practical Large Scale Classification with Additive Kernels AU - Hao Yang AU - Jianxin Wu BT - Proceedings of the Asian Conference on Machine Learning DA - 2012/11/17 ED - Steven C. H. Hoi ED - Wray Buntine ID - pmlr-v25-yang12 PB - PMLR DP - Proceedings of Machine Learning Research VL - 25 SP - 523 EP - 538 L1 - http://proceedings.mlr.press/v25/yang12/yang12.pdf UR - https://proceedings.mlr.press/v25/yang12.html ER -
APA
Yang, H., & Wu, J. (2012). Practical Large Scale Classification with Additive Kernels. Proceedings of the Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 25:523-538. Available from https://proceedings.mlr.press/v25/yang12.html.
