Unraveling Meta-Learning: Understanding Feature Representations for Few-Shot Tasks

Micah Goldblum, Steven Reich, Liam Fowl, Renkun Ni, Valeriia Cherepanova, Tom Goldstein
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:3607-3616, 2020.

Abstract

Meta-learning algorithms produce feature extractors which achieve state-of-the-art performance on few-shot classification. While the literature is rich with meta-learning methods, little is known about why the resulting feature extractors perform so well. We develop a better understanding of the underlying mechanics of meta-learning and the difference between models trained using meta-learning and models which are trained classically. In doing so, we introduce and verify several hypotheses for why meta-learned models perform better. Furthermore, we develop a regularizer which boosts the performance of standard training routines for few-shot classification. In many cases, our routine outperforms meta-learning while simultaneously running an order of magnitude faster.

Cite this Paper
BibTeX
@InProceedings{pmlr-v119-goldblum20a,
  title     = {Unraveling Meta-Learning: Understanding Feature Representations for Few-Shot Tasks},
  author    = {Goldblum, Micah and Reich, Steven and Fowl, Liam and Ni, Renkun and Cherepanova, Valeriia and Goldstein, Tom},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {3607--3616},
  year      = {2020},
  editor    = {Hal Daumé III and Aarti Singh},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/goldblum20a/goldblum20a.pdf},
  url       = {http://proceedings.mlr.press/v119/goldblum20a.html},
  abstract  = {Meta-learning algorithms produce feature extractors which achieve state-of-the-art performance on few-shot classification. While the literature is rich with meta-learning methods, little is known about why the resulting feature extractors perform so well. We develop a better understanding of the underlying mechanics of meta-learning and the difference between models trained using meta-learning and models which are trained classically. In doing so, we introduce and verify several hypotheses for why meta-learned models perform better. Furthermore, we develop a regularizer which boosts the performance of standard training routines for few-shot classification. In many cases, our routine outperforms meta-learning while simultaneously running an order of magnitude faster.}
}
Endnote
%0 Conference Paper
%T Unraveling Meta-Learning: Understanding Feature Representations for Few-Shot Tasks
%A Micah Goldblum
%A Steven Reich
%A Liam Fowl
%A Renkun Ni
%A Valeriia Cherepanova
%A Tom Goldstein
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-goldblum20a
%I PMLR
%P 3607--3616
%U http://proceedings.mlr.press/v119/goldblum20a.html
%V 119
%X Meta-learning algorithms produce feature extractors which achieve state-of-the-art performance on few-shot classification. While the literature is rich with meta-learning methods, little is known about why the resulting feature extractors perform so well. We develop a better understanding of the underlying mechanics of meta-learning and the difference between models trained using meta-learning and models which are trained classically. In doing so, we introduce and verify several hypotheses for why meta-learned models perform better. Furthermore, we develop a regularizer which boosts the performance of standard training routines for few-shot classification. In many cases, our routine outperforms meta-learning while simultaneously running an order of magnitude faster.
APA
Goldblum, M., Reich, S., Fowl, L., Ni, R., Cherepanova, V. & Goldstein, T. (2020). Unraveling Meta-Learning: Understanding Feature Representations for Few-Shot Tasks. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:3607-3616. Available from http://proceedings.mlr.press/v119/goldblum20a.html.