Challenges of Acquiring Compositional Inductive Biases via Meta-Learning

Eric Mitchell, Chelsea Finn, Chris Manning
AAAI Workshop on Meta-Learning and MetaDL Challenge, PMLR 140:138-148, 2021.

Abstract

Meta-learning is typically applied to settings where, given a distribution over related training tasks, the goal is to learn inductive biases that aid in generalization to new tasks from this distribution. Alternatively, we might consider a scenario where, given an inductive bias, we must construct a family of tasks that will inject the given inductive bias into a parametric model (e.g. a neural network) if meta-training is performed on the constructed task family. Inspired by recent work showing that such an algorithm can leverage meta-learning to improve generalization on a single-task learning problem, we consider various approaches to both a) the construction of the family of tasks and b) the procedure for selecting support sets for a particular single-task problem, the SCAN compositional generalization benchmark. We perform ablation experiments aimed at identifying when a meta-learning algorithm and family of tasks can impart the compositional inductive bias needed to solve SCAN. We conclude that existing meta-learning approaches to injecting compositional inductive biases are brittle and difficult to interpret, showing high sensitivity to both the family of meta-training tasks and the procedure for selecting support sets.
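
For readers unfamiliar with the setup, the sketch below illustrates the kind of episodic task-family construction and support-set sampling the abstract refers to, using a toy SCAN-like grammar in which each meta-training task permutes the mapping from input primitives to output actions (in the spirit of prior meta sequence-to-sequence work). This is a minimal illustrative sketch under those assumptions; the function and variable names are hypothetical and it is not the paper's implementation.

    # Hypothetical sketch (not the paper's code): build a family of meta-training
    # tasks for a SCAN-like problem by permuting the primitive -> action mapping,
    # then sample support/query sets for each episode.
    import random

    PRIMITIVES = ["jump", "walk", "run", "look"]
    ACTIONS = ["I_JUMP", "I_WALK", "I_RUN", "I_LOOK"]
    MODIFIERS = {"twice": 2, "thrice": 3}

    def sample_task(rng):
        """One 'task' = one permutation of the primitive -> action mapping."""
        perm = ACTIONS[:]
        rng.shuffle(perm)
        return dict(zip(PRIMITIVES, perm))

    def generate_example(mapping, rng):
        """Sample a (command, action-sequence) pair under a given mapping."""
        prim = rng.choice(PRIMITIVES)
        mod = rng.choice([None] + list(MODIFIERS))
        command = prim if mod is None else f"{prim} {mod}"
        reps = 1 if mod is None else MODIFIERS[mod]
        target = " ".join([mapping[prim]] * reps)
        return command, target

    def sample_episode(rng, n_support=4, n_query=4):
        """Support and query sets share one mapping, so the support set is the
        model's only evidence for which action each primitive currently denotes."""
        mapping = sample_task(rng)
        support = [generate_example(mapping, rng) for _ in range(n_support)]
        query = [generate_example(mapping, rng) for _ in range(n_query)]
        return support, query

    if __name__ == "__main__":
        rng = random.Random(0)
        support, query = sample_episode(rng)
        print("support:", support)
        print("query:  ", query)

In this framing, the two design choices the abstract ablates correspond to how sample_task defines the task family and how sample_episode chooses the support set; both are assumptions of this sketch rather than details taken from the paper.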

Cite this Paper

BibTeX
@InProceedings{pmlr-v140-mitchell21a,
  title     = {Challenges of Acquiring Compositional Inductive Biases via Meta-Learning},
  author    = {Mitchell, Eric and Finn, Chelsea and Manning, Chris},
  booktitle = {AAAI Workshop on Meta-Learning and MetaDL Challenge},
  pages     = {138--148},
  year      = {2021},
  editor    = {Guyon, Isabelle and van Rijn, Jan N. and Treguer, Sébastien and Vanschoren, Joaquin},
  volume    = {140},
  series    = {Proceedings of Machine Learning Research},
  month     = {09 Feb},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v140/mitchell21a/mitchell21a.pdf},
  url       = {https://proceedings.mlr.press/v140/mitchell21a.html}
}
Endnote
%0 Conference Paper
%T Challenges of Acquiring Compositional Inductive Biases via Meta-Learning
%A Eric Mitchell
%A Chelsea Finn
%A Chris Manning
%B AAAI Workshop on Meta-Learning and MetaDL Challenge
%C Proceedings of Machine Learning Research
%D 2021
%E Isabelle Guyon
%E Jan N. van Rijn
%E Sébastien Treguer
%E Joaquin Vanschoren
%F pmlr-v140-mitchell21a
%I PMLR
%P 138--148
%U https://proceedings.mlr.press/v140/mitchell21a.html
%V 140
APA
Mitchell, E., Finn, C. & Manning, C. (2021). Challenges of Acquiring Compositional Inductive Biases via Meta-Learning. AAAI Workshop on Meta-Learning and MetaDL Challenge, in Proceedings of Machine Learning Research 140:138-148. Available from https://proceedings.mlr.press/v140/mitchell21a.html.