Limits of Model Selection under Transfer Learning
Proceedings of the Thirty Sixth Conference on Learning Theory, PMLR 195:5781-5812, 2023.
Abstract
Theoretical studies of \emph{transfer learning} (or \emph{domain adaptation}) have so far focused on situations with a known hypothesis class or \emph{model}; however, in practice, some amount of model selection is usually involved, often appearing under the umbrella term of \emph{hyperparameter-tuning}: for example, one may think of the problem of \emph{tuning} for the right neural network architecture towards a target task, while leveraging data from a related \emph{source} task. Now, in addition to the usual tradeoffs between approximation and estimation errors involved in model selection, this problem brings in a new complexity term, namely, the \emph{transfer distance} between source and target distributions, which is known to vary with the choice of hypothesis class. We present a first study of this problem, focusing on classification; in particular, the analysis reveals some remarkable phenomena: \emph{adaptive rates}, i.e., those achievable with no distributional information, can be arbitrarily slower than \emph{oracle rates}, i.e., those achievable when given knowledge of such \emph{distances}.
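To make the tradeoff concrete, the following is a rough schematic of how a target excess-risk bound for a single fixed class $\mathcal{H}$ might decompose; it is an illustrative sketch, not the paper's theorem, and the symbols $d_{\mathcal{H}}$ (a complexity measure of $\mathcal{H}$), $n_T$ (target sample size), and $\Delta_{\mathcal{H}}(P_S, P_T)$ (a class-dependent transfer distance) are assumed placeholders:
\[
  \mathcal{E}_T(\hat h)
  \;\lesssim\;
  \underbrace{\inf_{h \in \mathcal{H}} \mathcal{E}_T(h)}_{\text{approximation}}
  \;+\;
  \underbrace{\sqrt{d_{\mathcal{H}} / n_T}}_{\text{estimation}}
  \;+\;
  \underbrace{\Delta_{\mathcal{H}}(P_S, P_T)}_{\text{transfer distance}}.
\]
Under this schematic, model selection over a family of classes must balance all three terms at once, and the third term changes with the choice of $\mathcal{H}$, which is what distinguishes this setting from classical model selection.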