Learning Abstract Task Representations
AAAI Workshop on Meta-Learning and MetaDL Challenge, PMLR 140:127-137, 2021.
A proper form of data characterization can guide the process of learning-algorithm selection and model-performance estimation. The field of meta-learning has produced a rich body of work describing effective forms of data characterization using different families of meta-features (statistical, model-based, information-theoretic, topological, etc.). In this paper, we start from the abundant set of existing meta-features and propose a method to induce new abstract meta-features as latent variables in a deep neural network. We discuss the pitfalls of using traditional meta-features directly and argue for the importance of learning high-level task properties. We implement our methodology using a deep neural network as a feature extractor, and show that 1) induced meta-models mapping abstract meta-features to generalization metrics outperform other methods by ${\sim}18\%$ on average, and 2) abstract meta-features attain high feature-relevance scores.
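To make the idea concrete, the sketch below illustrates one plausible reading of the setup: a small network takes a dataset's traditional meta-features as input, its hidden activations serve as the induced abstract meta-features, and an output head acts as the meta-model predicting a generalization metric. All layer sizes, the tanh activation, and the single-metric output are illustrative assumptions, not the paper's actual architecture; the weights here are random stand-ins for a trained feature extractor.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 12 traditional meta-features in,
# 4 latent "abstract" meta-features in the hidden layer.
n_traditional = 12
n_abstract = 4

# Randomly initialised weights stand in for a trained extractor.
W1 = rng.standard_normal((n_traditional, n_abstract)) * 0.1
b1 = np.zeros(n_abstract)
W2 = rng.standard_normal((n_abstract, 1)) * 0.1
b2 = np.zeros(1)

def extract_abstract_meta_features(x):
    """Hidden-layer activations act as the learned task representation."""
    return np.tanh(x @ W1 + b1)

def predict_generalization(x):
    """Meta-model: abstract meta-features -> scalar generalization metric."""
    z = extract_abstract_meta_features(x)
    return (z @ W2 + b2).squeeze()

# One dataset's traditional meta-features (statistical, info-theoretic, ...).
task = rng.standard_normal(n_traditional)
abstract = extract_abstract_meta_features(task)
metric = predict_generalization(task)
```

In this reading, `extract_abstract_meta_features` plays the role of the feature extractor, and downstream meta-models (or feature-relevance analyses) would consume its output rather than the raw meta-features.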