Fast Learning Rate of Multiple Kernel Learning: Trade-Off between Sparsity and Smoothness
Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics, PMLR 22:1152-1183, 2012.
Abstract
We investigate the learning rate of multiple kernel learning (MKL) with L1 and elastic-net regularizations. The elastic-net regularization is a combination of an L1-regularizer for inducing sparsity and an L2-regularizer for controlling smoothness. We focus on a sparse setting where the total number of kernels is large but the number of non-zero components of the ground truth is relatively small, and show convergence rates sharper than those previously established for both L1 and elastic-net regularizations. Our analysis reveals a trade-off between sparsity and smoothness when choosing between the L1 and elastic-net regularizations: if the ground truth is smooth, the elastic-net regularization is preferred; otherwise, the L1 regularization is preferred.
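As a rough sketch of the regularizer in question (the paper's exact norms and the scaling of the regularization parameters are assumptions here), the elastic-net penalty over M candidate kernels, with f = f_1 + ... + f_M and each component f_m lying in an RKHS H_m, takes the schematic form

% Schematic elastic-net regularizer for MKL (norms and scalings are assumptions)
\psi(f) = \lambda_1 \sum_{m=1}^{M} \|f_m\|_{\mathcal{H}_m}
        + \lambda_2 \sum_{m=1}^{M} \|f_m\|_{\mathcal{H}_m}^2 ,

where the block-L1 term (weight \lambda_1) drives whole components f_m to zero and the L2 term (weight \lambda_2) regularizes the surviving components; setting \lambda_2 = 0 recovers pure L1-MKL.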