Towards robust episodic meta-learning
Proceedings of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence, PMLR 161:1342-1351, 2021.
Abstract
Meta-learning learns across historical tasks with the goal of discovering a representation from which it is easy to adapt to unseen tasks. Episodic meta-learning simulates this setting by generating many small artificial tasks (episodes) from a larger set of training tasks for meta-training, and proceeds analogously for meta-testing. However, this (meta-)learning paradigm has recently been shown to be brittle, suggesting that the inductive bias encoded in the learned representations is inadequate. In this work we propose to compose episodes actively in order to robustify few-shot meta-learning, so that the learner trains more efficiently and generalizes better to new tasks. We use active-learning scoring rules to select the data included in each episode. We assume that the meta-learner receives new tasks at random, but that the data associated with each task can be selected from a larger pool of unlabeled data, and we investigate where active learning boosts the performance of episodic meta-learning. We show that selecting samples actively, rather than at random, is beneficial, especially in settings with out-of-distribution and class-imbalanced tasks. We evaluate our method with Prototypical Networks, foMAML, and protoMAML, reporting significant improvements on public benchmarks.
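
To make the episode-composition idea concrete, the following is a minimal sketch in Python: instead of sampling episode data uniformly from the unlabeled pool, pool examples are scored and the highest-scoring ones are selected. Predictive entropy is used here as one standard active-learning scoring rule; the function names (entropy_scores, compose_episode) and the choice of entropy are illustrative assumptions, not necessarily the exact rules evaluated in the paper.

    import numpy as np

    def entropy_scores(probs):
        # Predictive entropy of each pool example; higher = more uncertain.
        eps = 1e-12
        return -np.sum(probs * np.log(probs + eps), axis=1)

    def compose_episode(pool_embeddings, pool_probs, k):
        # Select the k most uncertain pool examples for the episode,
        # rather than sampling uniformly at random.
        scores = entropy_scores(pool_probs)
        selected = np.argsort(scores)[-k:]  # indices of top-k scores
        return pool_embeddings[selected], selected

    # Toy usage: a pool of 100 examples with 5-way predicted probabilities
    # (in practice these would come from the current meta-learner).
    rng = np.random.default_rng(0)
    pool_emb = rng.normal(size=(100, 64))
    logits = rng.normal(size=(100, 5))
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

    episode_data, idx = compose_episode(pool_emb, probs, k=25)
    print(episode_data.shape, idx[:5])

Other scoring rules (e.g. margin or least-confidence) would slot into the same interface by replacing entropy_scores.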