Towards robust episodic meta-learning

Beyza Ermis, Giovanni Zappella, Cédric Archambeau
Proceedings of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence, PMLR 161:1342-1351, 2021.

Abstract

Meta-learning learns across historical tasks with the goal of discovering a representation from which it is easy to adapt to unseen tasks. Episodic meta-learning attempts to simulate a realistic setting by generating a set of small artificial tasks from a larger set of training tasks for meta-training, and proceeds in a similar fashion for meta-testing. However, this (meta-)learning paradigm has recently been shown to be brittle, suggesting that the inductive bias encoded in the learned representations is inadequate. In this work we propose to compose episodes so as to robustify meta-learning in the few-shot setting, with the aim of learning more efficiently and generalizing better to new tasks. We use active-learning scoring rules to select the data to be included in the episodes. We assume that the meta-learner is given new tasks at random, but that the data associated with each task can be selected from a larger pool of unlabeled data, and we investigate where active learning can boost the performance of episodic meta-learning. We show that, rather than selecting samples at random, it is better to select them actively, especially in settings with out-of-distribution and class-imbalanced tasks. We evaluate our method with Prototypical Networks, foMAML and protoMAML, reporting significant improvements on public benchmarks.
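
The abstract does not spell out which active-learning scoring rule is used to compose the episodes. As an illustrative sketch only, the snippet below applies one standard criterion, predictive-entropy (uncertainty) sampling, to pick a support set from an unlabeled pool; the names predictive_entropy and compose_episode, and all parameters, are hypothetical and do not reflect the paper's actual implementation.

import numpy as np

def predictive_entropy(probs):
    # probs: (pool_size, n_classes) rows of class probabilities.
    eps = 1e-12  # guard against log(0)
    return -np.sum(probs * np.log(probs + eps), axis=1)

def compose_episode(pool_probs, support_size):
    # Rank unlabeled pool examples by predictive entropy and return the
    # indices of the `support_size` most uncertain ones.
    scores = predictive_entropy(pool_probs)
    return np.argsort(scores)[-support_size:]

# Toy usage: random softmax scores stand in for a meta-learner's predictions.
rng = np.random.default_rng(0)
logits = rng.normal(size=(100, 5))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
support_idx = compose_episode(probs, support_size=10)

Intuitively, high-entropy pool examples tend to lie in underrepresented or shifted regions of the input space, which is consistent with the abstract's finding that active selection helps most on out-of-distribution and class-imbalanced tasks.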

Cite this Paper


BibTeX
@InProceedings{pmlr-v161-ermis21a,
  title     = {Towards robust episodic meta-learning},
  author    = {Ermis, Beyza and Zappella, Giovanni and Archambeau, C\'edric},
  booktitle = {Proceedings of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence},
  pages     = {1342--1351},
  year      = {2021},
  editor    = {de Campos, Cassio and Maathuis, Marloes H.},
  volume    = {161},
  series    = {Proceedings of Machine Learning Research},
  month     = {27--30 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v161/ermis21a/ermis21a.pdf},
  url       = {https://proceedings.mlr.press/v161/ermis21a.html}
}
Endnote
%0 Conference Paper
%T Towards robust episodic meta-learning
%A Beyza Ermis
%A Giovanni Zappella
%A Cédric Archambeau
%B Proceedings of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence
%C Proceedings of Machine Learning Research
%D 2021
%E Cassio de Campos
%E Marloes H. Maathuis
%F pmlr-v161-ermis21a
%I PMLR
%P 1342--1351
%U https://proceedings.mlr.press/v161/ermis21a.html
%V 161
APA
Ermis, B., Zappella, G. & Archambeau, C. (2021). Towards robust episodic meta-learning. Proceedings of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence, in Proceedings of Machine Learning Research 161:1342-1351. Available from https://proceedings.mlr.press/v161/ermis21a.html.