Refining Kernels for Regression and Uneven Classification Problems

Jaz S. Kandola, John Shawe-Taylor
Proceedings of the Ninth International Workshop on Artificial Intelligence and Statistics, PMLR R4:157-162, 2003.

Abstract

Kernel alignment has recently been proposed as a method for measuring the degree of agreement between a kernel and a classification learning task. In this paper we extend the notion of kernel alignment to two other common learning problems: regression and classification with uneven data. We present a modified definition of alignment together with a novel theoretical justification for why improving alignment will lead to better performance in the regression case. Experimental evidence is provided to show that improving the alignment leads to a reduction in generalization error of standard regressors and classifiers.
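For context, the classification notion of alignment that the paper builds on is the empirical kernel-target alignment of Cristianini et al., A(K, yy^T) = <K, yy^T>_F / (||K||_F ||yy^T||_F), which measures how well a kernel matrix K agrees with the label outer product yy^T. The sketch below computes this quantity for a toy two-class sample; the RBF kernel, bandwidth, and data are illustrative assumptions, and the paper's modified alignment definitions for regression and uneven classification are not reproduced here.

```python
import numpy as np

def kernel_target_alignment(K, y):
    """Empirical alignment A(K, yy^T) = <K, yy^T>_F / (||K||_F ||yy^T||_F).

    For classification, y is a vector of +/-1 labels; this is the standard
    alignment measure that the paper's refinements start from.
    """
    yyT = np.outer(y, y)
    num = np.sum(K * yyT)                           # Frobenius inner product <K, yy^T>
    den = np.linalg.norm(K) * np.linalg.norm(yyT)   # product of Frobenius norms
    return num / den

# Toy example (illustrative only): RBF kernel on a small two-class sample.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 0.3, (20, 2)), rng.normal(1, 0.3, (20, 2))])
y = np.r_[-np.ones(20), np.ones(20)]

sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
K = np.exp(-sq_dists / 2.0)                         # RBF kernel, bandwidth 1 (assumed)

print("alignment:", kernel_target_alignment(K, y))
```

A well-aligned kernel gives a value close to 1 on this separable toy data; the paper's thesis is that refining a kernel to increase such alignment reduces the generalization error of the downstream regressor or classifier.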

Cite this Paper


BibTeX
@InProceedings{pmlr-vR4-kandola03a,
  title     = {Refining Kernels for Regression and Uneven Classification Problems},
  author    = {Kandola, Jaz S. and Shawe-Taylor, John},
  booktitle = {Proceedings of the Ninth International Workshop on Artificial Intelligence and Statistics},
  pages     = {157--162},
  year      = {2003},
  editor    = {Bishop, Christopher M. and Frey, Brendan J.},
  volume    = {R4},
  series    = {Proceedings of Machine Learning Research},
  month     = {03--06 Jan},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/r4/kandola03a/kandola03a.pdf},
  url       = {https://proceedings.mlr.press/r4/kandola03a.html},
  abstract  = {Kernel alignment has recently been proposed as a method for measuring the degree of agreement between a kernel and a classification learning task. In this paper we extend the notion of kernel alignment to two other common learning problems: regression and classification with uneven data. We present a modified definition of alignment together with a novel theoretical justification for why improving alignment will lead to better performance in the regression case. Experimental evidence is provided to show that improving the alignment leads to a reduction in generalization error of standard regressors and classifiers.},
  note      = {Reissued by PMLR on 01 April 2021.}
}
Endnote
%0 Conference Paper
%T Refining Kernels for Regression and Uneven Classification Problems
%A Jaz S. Kandola
%A John Shawe-Taylor
%B Proceedings of the Ninth International Workshop on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2003
%E Christopher M. Bishop
%E Brendan J. Frey
%F pmlr-vR4-kandola03a
%I PMLR
%P 157--162
%U https://proceedings.mlr.press/r4/kandola03a.html
%V R4
%X Kernel alignment has recently been proposed as a method for measuring the degree of agreement between a kernel and a classification learning task. In this paper we extend the notion of kernel alignment to two other common learning problems: regression and classification with uneven data. We present a modified definition of alignment together with a novel theoretical justification for why improving alignment will lead to better performance in the regression case. Experimental evidence is provided to show that improving the alignment leads to a reduction in generalization error of standard regressors and classifiers.
%Z Reissued by PMLR on 01 April 2021.
APA
Kandola, J.S. & Shawe-Taylor, J. (2003). Refining Kernels for Regression and Uneven Classification Problems. Proceedings of the Ninth International Workshop on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research R4:157-162. Available from https://proceedings.mlr.press/r4/kandola03a.html. Reissued by PMLR on 01 April 2021.
