Online Kernel Learning with a Near Optimal Sparsity Bound

Lijun Zhang, Jinfeng Yi, Rong Jin, Ming Lin, Xiaofei He
Proceedings of the 30th International Conference on Machine Learning, PMLR 28(3):621-629, 2013.

Abstract

In this work, we focus on Online Sparse Kernel Learning, which aims to learn, in an online fashion, a kernel classifier with a bounded number of support vectors. Although many online learning algorithms have been proposed to learn a sparse kernel classifier, most of them fail to bound the number of support vectors used by the final solution, which is the average of the intermediate kernel classifiers generated by the online algorithm. The key idea of the proposed algorithm is to measure the difficulty of correctly classifying a training example by the derivative of a smooth loss function, and, via a sampling scheme, to give a difficult example a greater chance than an easy one of becoming a support vector. Our analysis shows that when the loss function is smooth, the proposed algorithm yields a performance guarantee similar to that of the standard online learning algorithm, but with a near optimal number of support vectors (up to a poly(ln T) factor). Our empirical study shows promising performance of the proposed algorithm compared to state-of-the-art algorithms for online sparse kernel learning.
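The core sampling idea described above can be illustrated with a minimal sketch. This is not the authors' actual algorithm or its importance-weighting scheme, only the general pattern: use the derivative magnitude of a smooth loss (here, logistic loss, which is bounded in (0, 1)) as the probability that the current example is kept as a support vector. The function and parameter names (`rbf`, `train_sparse`, `eta`, `gamma`) are hypothetical.

```python
import math
import random

def rbf(x, y, gamma=1.0):
    # Gaussian (RBF) kernel between two feature tuples.
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def logistic_deriv(margin):
    # |d/dz log(1 + exp(-z))| = 1 / (1 + exp(z)); smooth, in (0, 1),
    # large for hard (misclassified) examples, small for easy ones.
    return 1.0 / (1.0 + math.exp(margin))

def train_sparse(stream, eta=0.5, gamma=1.0, seed=0):
    """Online kernel classifier that admits an example as a support
    vector with probability equal to its loss-derivative magnitude."""
    rng = random.Random(seed)
    support = []  # list of (coefficient, example) pairs
    for x, y in stream:
        margin = y * sum(c * rbf(s, x, gamma) for c, s in support)
        p = logistic_deriv(margin)  # "difficulty" of (x, y)
        if rng.random() < p:
            support.append((eta * y, x))  # gradient-style update on the kept example
    return support

def predict(support, x, gamma=1.0):
    # Kernel expansion over the retained support vectors.
    return sum(c * rbf(s, x, gamma) for c, s in support)
```

Because the derivative shrinks as an example becomes easy to classify, the sampler admits fewer and fewer support vectors over time, which is the mechanism behind the sparsity bound; the paper's analysis additionally handles the averaging of intermediate classifiers, which this sketch omits.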

Cite this Paper


BibTeX
@InProceedings{pmlr-v28-zhang13c,
  title     = {Online Kernel Learning with a Near Optimal Sparsity Bound},
  author    = {Zhang, Lijun and Yi, Jinfeng and Jin, Rong and Lin, Ming and He, Xiaofei},
  booktitle = {Proceedings of the 30th International Conference on Machine Learning},
  pages     = {621--629},
  year      = {2013},
  editor    = {Dasgupta, Sanjoy and McAllester, David},
  volume    = {28},
  number    = {3},
  series    = {Proceedings of Machine Learning Research},
  address   = {Atlanta, Georgia, USA},
  month     = {17--19 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v28/zhang13c.pdf},
  url       = {https://proceedings.mlr.press/v28/zhang13c.html},
  abstract  = {In this work, we focus on Online Sparse Kernel Learning that aims to online learn a kernel classifier with a bounded number of support vectors. Although many online learning algorithms have been proposed to learn a sparse kernel classifier, most of them fail to bound the number of support vectors used by the final solution which is the average of the intermediate kernel classifiers generated by online algorithms. The key idea of the proposed algorithm is to measure the difficulty in correctly classifying a training example by the derivative of a smooth loss function, and give a more chance to a difficult example to be a support vector than an easy one via a sampling scheme. Our analysis shows that when the loss function is smooth, the proposed algorithm yields similar performance guarantee as the standard online learning algorithm but with a near optimal number of support vectors (up to a poly(lnT) factor). Our empirical study shows promising performance of the proposed algorithm compared to the state-of-the-art algorithms for online sparse kernel learning.}
}
EndNote
%0 Conference Paper
%T Online Kernel Learning with a Near Optimal Sparsity Bound
%A Lijun Zhang
%A Jinfeng Yi
%A Rong Jin
%A Ming Lin
%A Xiaofei He
%B Proceedings of the 30th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2013
%E Sanjoy Dasgupta
%E David McAllester
%F pmlr-v28-zhang13c
%I PMLR
%P 621--629
%U https://proceedings.mlr.press/v28/zhang13c.html
%V 28
%N 3
%X In this work, we focus on Online Sparse Kernel Learning that aims to online learn a kernel classifier with a bounded number of support vectors. Although many online learning algorithms have been proposed to learn a sparse kernel classifier, most of them fail to bound the number of support vectors used by the final solution which is the average of the intermediate kernel classifiers generated by online algorithms. The key idea of the proposed algorithm is to measure the difficulty in correctly classifying a training example by the derivative of a smooth loss function, and give a more chance to a difficult example to be a support vector than an easy one via a sampling scheme. Our analysis shows that when the loss function is smooth, the proposed algorithm yields similar performance guarantee as the standard online learning algorithm but with a near optimal number of support vectors (up to a poly(lnT) factor). Our empirical study shows promising performance of the proposed algorithm compared to the state-of-the-art algorithms for online sparse kernel learning.
RIS
TY - CPAPER
TI - Online Kernel Learning with a Near Optimal Sparsity Bound
AU - Lijun Zhang
AU - Jinfeng Yi
AU - Rong Jin
AU - Ming Lin
AU - Xiaofei He
BT - Proceedings of the 30th International Conference on Machine Learning
DA - 2013/05/26
ED - Sanjoy Dasgupta
ED - David McAllester
ID - pmlr-v28-zhang13c
PB - PMLR
DP - Proceedings of Machine Learning Research
VL - 28
IS - 3
SP - 621
EP - 629
L1 - http://proceedings.mlr.press/v28/zhang13c.pdf
UR - https://proceedings.mlr.press/v28/zhang13c.html
AB - In this work, we focus on Online Sparse Kernel Learning that aims to online learn a kernel classifier with a bounded number of support vectors. Although many online learning algorithms have been proposed to learn a sparse kernel classifier, most of them fail to bound the number of support vectors used by the final solution which is the average of the intermediate kernel classifiers generated by online algorithms. The key idea of the proposed algorithm is to measure the difficulty in correctly classifying a training example by the derivative of a smooth loss function, and give a more chance to a difficult example to be a support vector than an easy one via a sampling scheme. Our analysis shows that when the loss function is smooth, the proposed algorithm yields similar performance guarantee as the standard online learning algorithm but with a near optimal number of support vectors (up to a poly(lnT) factor). Our empirical study shows promising performance of the proposed algorithm compared to the state-of-the-art algorithms for online sparse kernel learning.
ER -
APA
Zhang, L., Yi, J., Jin, R., Lin, M. & He, X. (2013). Online Kernel Learning with a Near Optimal Sparsity Bound. Proceedings of the 30th International Conference on Machine Learning, in Proceedings of Machine Learning Research 28(3):621-629. Available from https://proceedings.mlr.press/v28/zhang13c.html.