Transfer Learning by Kernel Meta-Learning
Proceedings of ICML Workshop on Unsupervised and Transfer Learning, PMLR 27:81-95, 2012.
Abstract
A crucial issue in machine learning is how to learn appropriate representations for data. Recently, much work has been devoted to kernel learning, that is, the problem of finding a good kernel matrix for a given task. This can be done in a semi-supervised learning setting by using a large set of unlabeled data together with a (typically small) set of i.i.d. labeled data. Another, even more challenging, problem is how one can exploit partially labeled data from a source task to learn good representations for a different but related target task. This is the main subject of transfer learning. In this paper, we present a novel approach to transfer learning based on kernel learning. Specifically, we propose a kernel meta-learning algorithm which, starting from a basic kernel, tries to learn chains of kernel transforms that produce good kernel matrices for the source tasks. The same sequence of transformations can then be applied to compute the kernel matrix for new, related target tasks. We report on the application of this method to the five datasets of the Unsupervised and Transfer Learning (UTL) challenge benchmark, where we won the first phase of the competition.
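To make the idea of a learned chain of kernel transforms concrete, the following is a minimal Python sketch. The transform pool (centering, cosine normalization, an element-wise polynomial map), the greedy search, and the alignment score are illustrative assumptions, not the algorithm from the paper; the sketch only shows how a chain fitted on a labeled source kernel can be replayed on a target kernel.

```python
import numpy as np

# Hypothetical pool of kernel transforms; the paper's actual transform
# set is not reproduced here.
def center(K):
    # Center the kernel in feature space: K <- HKH with H = I - (1/n) 11^T.
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def normalize(K):
    # Cosine normalization, giving a unit diagonal.
    d = np.sqrt(np.clip(np.diag(K), 1e-12, None))
    return K / np.outer(d, d)

def square(K):
    # Element-wise degree-2 polynomial map (still a valid kernel).
    return (K + 1.0) ** 2

TRANSFORMS = [center, normalize, square]

def alignment(K, y):
    # Kernel-target alignment <K, yy^T>_F / (||K||_F ||yy^T||_F), used
    # here as an illustrative quality score on the labeled source task.
    Y = np.outer(y, y)
    return (K * Y).sum() / (np.linalg.norm(K) * np.linalg.norm(Y) + 1e-12)

def apply_chain(K, chain):
    for t in chain:
        K = t(K)
    return K

def learn_chain(K_source, y_source, max_len=5):
    # Greedily grow a chain of transforms that improves the score on
    # the source task; stop when no candidate transform helps.
    chain, best = [], alignment(K_source, y_source)
    for _ in range(max_len):
        scored = [(alignment(apply_chain(K_source, chain + [t]), y_source), t)
                  for t in TRANSFORMS]
        s, t = max(scored, key=lambda p: p[0])
        if s <= best:
            break
        chain, best = chain + [t], s
    return chain

# The learned chain is then replayed on the target task's basic kernel:
# K_target_new = apply_chain(K_target, learn_chain(K_source, y_source))
```

Note that all three example transforms preserve positive semidefiniteness (centering and cosine normalization by construction, and the polynomial map because sums and Schur products of PSD matrices are PSD), which any practical transform pool for this scheme would need to guarantee.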