Learning Causal Overhypotheses through Exploration in Children and Computational Models

Eliza Kosoy, Adrian Liu, Jasmine L Collins, David Chan, Jessica B Hamrick, Nan Rosemary Ke, Sandy Huang, Bryanna Kaufmann, John Canny, Alison Gopnik
Proceedings of the First Conference on Causal Learning and Reasoning, PMLR 177:390-406, 2022.

Abstract

Despite recent progress in reinforcement learning (RL), RL algorithms for exploration remain an active area of research. Existing methods often focus on state-based metrics, which do not consider the underlying causal structures of the environment, and while recent research has begun to explore RL environments for causal learning, these environments primarily leverage causal information through causal inference or induction rather than exploration. In contrast, human children—some of the most proficient explorers—have been shown to use causal information to great benefit. In this work, we introduce a novel RL environment designed with a controllable causal structure, which allows us to evaluate exploration strategies used by both agents and children in a unified environment. In addition, through experimentation on both computational models and children, we demonstrate that there are significant differences between information-gain optimal RL exploration in causal environments and the exploration of children in the same environments. We leverage this new insight to lay the groundwork for future research into efficient exploration and disambiguation of causal structures for RL algorithms.
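To make the notion of a "controllable causal structure" concrete: the environment in this paper builds on the classic blicket-detector paradigm, in which a detector activates according to a hidden rule over which objects are "blickets." Below is a minimal, illustrative sketch of such an environment where the ground-truth overhypothesis (disjunctive: any blicket activates the detector; conjunctive: all blickets must be present together) is a constructor parameter. The class name, API, and reward scheme here are assumptions for illustration, not the authors' released code.

# Hypothetical sketch of a blicket-detector environment with a
# controllable causal overhypothesis (illustrative, not the paper's code).
import random

class BlicketEnv:
    """Detector lights up under a hidden rule over 'blicket' objects.

    overhypothesis: 'disjunctive' -> any blicket on the detector activates it
                    'conjunctive' -> all blickets must be on the detector
    """
    def __init__(self, n_objects=3, n_blickets=2,
                 overhypothesis="disjunctive", seed=0):
        rng = random.Random(seed)
        self.n_objects = n_objects
        # Secretly choose which objects are blickets.
        self.blickets = set(rng.sample(range(n_objects), n_blickets))
        self.overhypothesis = overhypothesis

    def reset(self):
        self.on_detector = set()
        return self._obs()

    def step(self, action):
        # action: index of an object; toggles it on/off the detector.
        self.on_detector ^= {action}
        lit = self._detector_lit()
        return self._obs(), float(lit), False, {"lit": lit}

    def _detector_lit(self):
        if self.overhypothesis == "disjunctive":
            return len(self.on_detector & self.blickets) >= 1
        return self.blickets <= self.on_detector  # conjunctive rule

    def _obs(self):
        # Binary vector: which objects are currently on the detector.
        return [int(i in self.on_detector) for i in range(self.n_objects)]

# Example: an exploring agent (or child) must intervene to disambiguate
# the two overhypotheses, since single-object trials only separate them
# when the detector fails to light.
env = BlicketEnv(overhypothesis="conjunctive")
obs = env.reset()
obs, reward, done, info = env.step(0)  # place object 0 on the detector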

Cite this Paper


BibTeX
@InProceedings{pmlr-v177-kosoy22a,
  title     = {Learning Causal Overhypotheses through Exploration in Children and Computational Models},
  author    = {Kosoy, Eliza and Liu, Adrian and Collins, Jasmine L and Chan, David and Hamrick, Jessica B and Ke, Nan Rosemary and Huang, Sandy and Kaufmann, Bryanna and Canny, John and Gopnik, Alison},
  booktitle = {Proceedings of the First Conference on Causal Learning and Reasoning},
  pages     = {390--406},
  year      = {2022},
  editor    = {Schölkopf, Bernhard and Uhler, Caroline and Zhang, Kun},
  volume    = {177},
  series    = {Proceedings of Machine Learning Research},
  month     = {11--13 Apr},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v177/kosoy22a/kosoy22a.pdf},
  url       = {https://proceedings.mlr.press/v177/kosoy22a.html},
  abstract  = {Despite recent progress in reinforcement learning (RL), RL algorithms for exploration remain an active area of research. Existing methods often focus on state-based metrics, which do not consider the underlying causal structures of the environment, and while recent research has begun to explore RL environments for causal learning, these environments primarily leverage causal information through causal inference or induction rather than exploration. In contrast, human children—some of the most proficient explorers—have been shown to use causal information to great benefit. In this work, we introduce a novel RL environment designed with a controllable causal structure, which allows us to evaluate exploration strategies used by both agents and children in a unified environment. In addition, through experimentation on both computational models and children, we demonstrate that there are significant differences between information-gain optimal RL exploration in causal environments and the exploration of children in the same environments. We leverage this new insight to lay the groundwork for future research into efficient exploration and disambiguation of causal structures for RL algorithms.}
}
Endnote
%0 Conference Paper
%T Learning Causal Overhypotheses through Exploration in Children and Computational Models
%A Eliza Kosoy
%A Adrian Liu
%A Jasmine L Collins
%A David Chan
%A Jessica B Hamrick
%A Nan Rosemary Ke
%A Sandy Huang
%A Bryanna Kaufmann
%A John Canny
%A Alison Gopnik
%B Proceedings of the First Conference on Causal Learning and Reasoning
%C Proceedings of Machine Learning Research
%D 2022
%E Bernhard Schölkopf
%E Caroline Uhler
%E Kun Zhang
%F pmlr-v177-kosoy22a
%I PMLR
%P 390--406
%U https://proceedings.mlr.press/v177/kosoy22a.html
%V 177
%X Despite recent progress in reinforcement learning (RL), RL algorithms for exploration remain an active area of research. Existing methods often focus on state-based metrics, which do not consider the underlying causal structures of the environment, and while recent research has begun to explore RL environments for causal learning, these environments primarily leverage causal information through causal inference or induction rather than exploration. In contrast, human children—some of the most proficient explorers—have been shown to use causal information to great benefit. In this work, we introduce a novel RL environment designed with a controllable causal structure, which allows us to evaluate exploration strategies used by both agents and children in a unified environment. In addition, through experimentation on both computational models and children, we demonstrate that there are significant differences between information-gain optimal RL exploration in causal environments and the exploration of children in the same environments. We leverage this new insight to lay the groundwork for future research into efficient exploration and disambiguation of causal structures for RL algorithms.
APA
Kosoy, E., Liu, A., Collins, J.L., Chan, D., Hamrick, J.B., Ke, N.R., Huang, S., Kaufmann, B., Canny, J. & Gopnik, A. (2022). Learning Causal Overhypotheses through Exploration in Children and Computational Models. Proceedings of the First Conference on Causal Learning and Reasoning, in Proceedings of Machine Learning Research 177:390-406. Available from https://proceedings.mlr.press/v177/kosoy22a.html.