Learning Abstract Task Representations

Mikhail M. Meskhi, Adriano Rivolli, Rafael G. Mantovani, Ricardo Vilalta
AAAI Workshop on Meta-Learning and MetaDL Challenge, PMLR 140:127-137, 2021.

Abstract

A proper form of data characterization can guide the process of learning-algorithm selection and model-performance estimation. The field of meta-learning has provided a rich body of work describing effective forms of data characterization using different families of meta-features (statistical, model-based, information-theoretic, topological, etc.). In this paper, we start with the abundant set of existing meta-features and propose a method to induce new abstract meta-features as latent variables in a deep neural network. We discuss the pitfalls of using traditional meta-features directly and argue for the importance of learning high-level task properties. We demonstrate our methodology using a deep neural network as a feature extractor, and show that 1) induced meta-models mapping abstract meta-features to generalization metrics outperform other methods by ~18% on average, and 2) abstract meta-features attain high feature-relevance scores.
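To make the idea concrete, below is a minimal sketch (in PyTorch) of one plausible reading of the methodology: a small feed-forward network whose bottleneck layer yields the abstract meta-features and whose output head predicts a generalization metric. This is not the authors' implementation; the class name MetaFeatureExtractor, the layer sizes, n_latent, and the training setup are all illustrative assumptions.

# A minimal sketch (not the authors' code) of inducing abstract
# meta-features as latent variables in a deep neural network.
# All names, dimensions, and hyperparameters are assumptions.
import torch
import torch.nn as nn

class MetaFeatureExtractor(nn.Module):
    """Maps traditional meta-features to a low-dimensional latent code
    (the abstract meta-features) and predicts a generalization metric."""

    def __init__(self, n_meta_features: int, n_latent: int = 8):
        super().__init__()
        # Encoder: traditional meta-features -> abstract meta-features.
        self.encoder = nn.Sequential(
            nn.Linear(n_meta_features, 64), nn.ReLU(),
            nn.Linear(64, n_latent), nn.ReLU(),
        )
        # Head: abstract meta-features -> predicted generalization
        # metric (e.g., an algorithm's accuracy on the task).
        self.head = nn.Linear(n_latent, 1)

    def forward(self, x):
        z = self.encoder(x)   # latent code = abstract meta-features
        return self.head(z), z

# Usage sketch: X holds traditional meta-features (statistical,
# information-theoretic, ...) for a collection of tasks; y holds the
# observed generalization metric per task. Both are placeholders here.
model = MetaFeatureExtractor(n_meta_features=30)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

X = torch.randn(100, 30)   # placeholder meta-dataset (100 tasks)
y = torch.rand(100, 1)     # placeholder performance scores

for _ in range(200):
    pred, _ = model(X)
    loss = loss_fn(pred, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# After training, the bottleneck activations act as abstract
# meta-features for downstream meta-models.
with torch.no_grad():
    _, abstract_meta_features = model(X)

Training the latent code end-to-end against a generalization metric, rather than hand-picking meta-features, is what lets the network act as the feature extractor the abstract describes.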

Cite this Paper


BibTeX
@InProceedings{pmlr-v140-meskhi21a,
  title     = {Learning Abstract Task Representations},
  author    = {Meskhi, Mikhail M. and Rivolli, Adriano and Mantovani, Rafael G. and Vilalta, Ricardo},
  booktitle = {AAAI Workshop on Meta-Learning and MetaDL Challenge},
  pages     = {127--137},
  year      = {2021},
  editor    = {Guyon, Isabelle and van Rijn, Jan N. and Treguer, Sébastien and Vanschoren, Joaquin},
  volume    = {140},
  series    = {Proceedings of Machine Learning Research},
  month     = {09 Feb},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v140/meskhi21a/meskhi21a.pdf},
  url       = {https://proceedings.mlr.press/v140/meskhi21a.html},
  abstract  = {A proper form of data characterization can guide the process of learning-algorithm selection and model-performance estimation. The field of meta-learning has provided a rich body of work describing effective forms of data characterization using different families of meta-features (statistical, model-based, information-theoretic, topological, etc.). In this paper, we start with the abundant set of existing meta-features and propose a method to induce new abstract meta-features as latent variables in a deep neural network. We discuss the pitfalls of using traditional meta-features directly and argue for the importance of learning high-level task properties. We demonstrate our methodology using a deep neural network as a feature extractor, and show that 1) induced meta-models mapping abstract meta-features to generalization metrics outperform other methods by $\sim 18\%$ on average, and 2) abstract meta-features attain high feature-relevance scores.}
}
Endnote
%0 Conference Paper
%T Learning Abstract Task Representations
%A Mikhail M. Meskhi
%A Adriano Rivolli
%A Rafael G. Mantovani
%A Ricardo Vilalta
%B AAAI Workshop on Meta-Learning and MetaDL Challenge
%C Proceedings of Machine Learning Research
%D 2021
%E Isabelle Guyon
%E Jan N. van Rijn
%E Sébastien Treguer
%E Joaquin Vanschoren
%F pmlr-v140-meskhi21a
%I PMLR
%P 127--137
%U https://proceedings.mlr.press/v140/meskhi21a.html
%V 140
%X A proper form of data characterization can guide the process of learning-algorithm selection and model-performance estimation. The field of meta-learning has provided a rich body of work describing effective forms of data characterization using different families of meta-features (statistical, model-based, information-theoretic, topological, etc.). In this paper, we start with the abundant set of existing meta-features and propose a method to induce new abstract meta-features as latent variables in a deep neural network. We discuss the pitfalls of using traditional meta-features directly and argue for the importance of learning high-level task properties. We demonstrate our methodology using a deep neural network as a feature extractor, and show that 1) induced meta-models mapping abstract meta-features to generalization metrics outperform other methods by ~18% on average, and 2) abstract meta-features attain high feature-relevance scores.
APA
Meskhi, M.M., Rivolli, A., Mantovani, R.G. & Vilalta, R. (2021). Learning Abstract Task Representations. AAAI Workshop on Meta-Learning and MetaDL Challenge, in Proceedings of Machine Learning Research 140:127-137. Available from https://proceedings.mlr.press/v140/meskhi21a.html.
