Improved Loss Bounds For Multiple Kernel Learning

Zakria Hussain, John Shawe-Taylor
Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, PMLR 15:370-377, 2011.

Abstract

We propose two new generalization error bounds for multiple kernel learning (MKL). First, using the bound of Srebro and Ben-David (2006) as a starting point, we derive a new version which uses a simple counting argument for the choice of kernels in order to generate a tighter bound when 1-norm regularization (sparsity) is imposed in the kernel learning problem. The second bound is a Rademacher complexity bound which is additive in the (logarithmic) kernel complexity and margin term. This dependency is superior to all previously published Rademacher bounds for learning a convex combination of kernels, including the recent bound of Cortes et al. (2010), which exhibits a multiplicative interaction. We illustrate the tightness of our bounds with simulations.
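To make the contrast concrete, the two claimed dependencies can be sketched schematically (this is an illustrative reading of the abstract, not the paper's exact statements; constants, loss definitions, and lower-order confidence terms are omitted). For a convex combination of $K$ base kernels, margin $\gamma$, and sample size $m$, an additive bound has the shape

L(f) \;\le\; \hat{L}_{\gamma}(f) \;+\; O\!\left(\frac{1}{\gamma\sqrt{m}}\right) \;+\; O\!\left(\sqrt{\frac{\log K}{m}}\right) \qquad \text{(additive)}

whereas a bound with a multiplicative interaction between kernel complexity and margin has the shape

L(f) \;\le\; \hat{L}_{\gamma}(f) \;+\; O\!\left(\frac{\sqrt{\log K}}{\gamma\sqrt{m}}\right) \qquad \text{(multiplicative)}

In the additive form the $\log K$ term does not get inflated by a small margin $\gamma$, which is the sense in which the paper's second bound improves on multiplicative-style bounds.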

Cite this Paper


BibTeX
@InProceedings{pmlr-v15-hussain11a,
  title     = {Improved Loss Bounds For Multiple Kernel Learning},
  author    = {Hussain, Zakria and Shawe-Taylor, John},
  booktitle = {Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics},
  pages     = {370--377},
  year      = {2011},
  editor    = {Gordon, Geoffrey and Dunson, David and Dudík, Miroslav},
  volume    = {15},
  series    = {Proceedings of Machine Learning Research},
  address   = {Fort Lauderdale, FL, USA},
  month     = {11--13 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v15/hussain11a/hussain11a.pdf},
  url       = {https://proceedings.mlr.press/v15/hussain11a.html},
  abstract  = {We propose two new generalization error bounds for multiple kernel learning (MKL). First, using the bound of Srebro and Ben-David (2006) as a starting point, we derive a new version which uses a simple counting argument for the choice of kernels in order to generate a tighter bound when 1-norm regularization (sparsity) is imposed in the kernel learning problem. The second bound is a Rademacher complexity bound which is additive in the (logarithmic) kernel complexity and margin term. This dependency is superior to all previously published Rademacher bounds for learning a convex combination of kernels, including the recent bound of Cortes et al. (2010), which exhibits a multiplicative interaction. We illustrate the tightness of our bounds with simulations.}
}
Endnote
%0 Conference Paper
%T Improved Loss Bounds For Multiple Kernel Learning
%A Zakria Hussain
%A John Shawe-Taylor
%B Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2011
%E Geoffrey Gordon
%E David Dunson
%E Miroslav Dudík
%F pmlr-v15-hussain11a
%I PMLR
%P 370--377
%U https://proceedings.mlr.press/v15/hussain11a.html
%V 15
%X We propose two new generalization error bounds for multiple kernel learning (MKL). First, using the bound of Srebro and Ben-David (2006) as a starting point, we derive a new version which uses a simple counting argument for the choice of kernels in order to generate a tighter bound when 1-norm regularization (sparsity) is imposed in the kernel learning problem. The second bound is a Rademacher complexity bound which is additive in the (logarithmic) kernel complexity and margin term. This dependency is superior to all previously published Rademacher bounds for learning a convex combination of kernels, including the recent bound of Cortes et al. (2010), which exhibits a multiplicative interaction. We illustrate the tightness of our bounds with simulations.
RIS
TY  - CPAPER
TI  - Improved Loss Bounds For Multiple Kernel Learning
AU  - Zakria Hussain
AU  - John Shawe-Taylor
BT  - Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics
DA  - 2011/06/14
ED  - Geoffrey Gordon
ED  - David Dunson
ED  - Miroslav Dudík
ID  - pmlr-v15-hussain11a
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 15
SP  - 370
EP  - 377
L1  - http://proceedings.mlr.press/v15/hussain11a/hussain11a.pdf
UR  - https://proceedings.mlr.press/v15/hussain11a.html
AB  - We propose two new generalization error bounds for multiple kernel learning (MKL). First, using the bound of Srebro and Ben-David (2006) as a starting point, we derive a new version which uses a simple counting argument for the choice of kernels in order to generate a tighter bound when 1-norm regularization (sparsity) is imposed in the kernel learning problem. The second bound is a Rademacher complexity bound which is additive in the (logarithmic) kernel complexity and margin term. This dependency is superior to all previously published Rademacher bounds for learning a convex combination of kernels, including the recent bound of Cortes et al. (2010), which exhibits a multiplicative interaction. We illustrate the tightness of our bounds with simulations.
ER  -
APA
Hussain, Z. & Shawe-Taylor, J. (2011). Improved Loss Bounds For Multiple Kernel Learning. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 15:370-377. Available from https://proceedings.mlr.press/v15/hussain11a.html.
