Tell me why! Explanations support learning relational and causal structure

Andrew K. Lampinen, Nicholas Roy, Ishita Dasgupta, Stephanie C. Y. Chan, Allison Tam, James McClelland, Chen Yan, Adam Santoro, Neil C. Rabinowitz, Jane Wang, Felix Hill
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:11868-11890, 2022.

Abstract

Inferring the abstract relational and causal structure of the world is a major challenge for reinforcement-learning (RL) agents. For humans, language, particularly in the form of explanations, plays a considerable role in overcoming this challenge. Here, we show that language can play a similar role for deep RL agents in complex environments. While agents typically struggle to acquire relational and causal knowledge, augmenting their experience by training them to predict language descriptions and explanations can overcome these limitations. We show that language can help agents learn challenging relational tasks, and examine which aspects of language contribute to its benefits. We then show that explanations can help agents to infer not only relational but also causal structure. Language can shape the way that agents generalize out of distribution from ambiguous, causally confounded training, and explanations even allow agents to learn to perform experimental interventions to identify causal relationships. Our results suggest that language description and explanation may be powerful tools for improving agent learning and generalization.
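To make the training idea above concrete, the following is a minimal, hypothetical sketch (in PyTorch; not the authors' implementation) of an agent whose shared encoder feeds both the usual RL heads and an auxiliary head trained to predict explanation tokens. The class and function names, the single-token targets, and the loss weighting are all illustrative assumptions.

# Hypothetical sketch of auxiliary explanation prediction for an RL agent.
# All names, shapes, and the single-token simplification are assumptions
# made for illustration; they are not the paper's architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ExplainingAgent(nn.Module):
    def __init__(self, obs_dim: int, n_actions: int, vocab_size: int, hidden: int = 256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.policy_head = nn.Linear(hidden, n_actions)    # action logits for RL
        self.value_head = nn.Linear(hidden, 1)             # state-value estimate
        self.explain_head = nn.Linear(hidden, vocab_size)  # explanation-token logits

    def forward(self, obs: torch.Tensor):
        h = self.encoder(obs)
        return self.policy_head(h), self.value_head(h), self.explain_head(h)

def combined_loss(agent, obs, rl_loss, explanation_tokens, aux_weight=1.0):
    """RL loss plus auxiliary cross-entropy on ground-truth explanation tokens."""
    _, _, token_logits = agent(obs)
    aux_loss = F.cross_entropy(token_logits, explanation_tokens)
    return rl_loss + aux_weight * aux_loss

# Hypothetical usage with dummy data:
agent = ExplainingAgent(obs_dim=32, n_actions=4, vocab_size=1000)
obs = torch.randn(8, 32)               # batch of observations
tokens = torch.randint(0, 1000, (8,))  # target explanation tokens
rl_loss = torch.tensor(0.0)            # placeholder for the RL objective term
loss = combined_loss(agent, obs, rl_loss, tokens)
loss.backward()

In this sketch the explanation head is used only at training time; its gradients flow into the shared encoder, which is one plausible mechanism by which language targets could shape the representations the policy relies on.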

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-lampinen22a,
  title     = {Tell me why! {E}xplanations support learning relational and causal structure},
  author    = {Lampinen, Andrew K and Roy, Nicholas and Dasgupta, Ishita and Chan, Stephanie Cy and Tam, Allison and Mcclelland, James and Yan, Chen and Santoro, Adam and Rabinowitz, Neil C and Wang, Jane and Hill, Felix},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {11868--11890},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/lampinen22a/lampinen22a.pdf},
  url       = {https://proceedings.mlr.press/v162/lampinen22a.html},
  abstract  = {Inferring the abstract relational and causal structure of the world is a major challenge for reinforcement-learning (RL) agents. For humans, language, particularly in the form of explanations, plays a considerable role in overcoming this challenge. Here, we show that language can play a similar role for deep RL agents in complex environments. While agents typically struggle to acquire relational and causal knowledge, augmenting their experience by training them to predict language descriptions and explanations can overcome these limitations. We show that language can help agents learn challenging relational tasks, and examine which aspects of language contribute to its benefits. We then show that explanations can help agents to infer not only relational but also causal structure. Language can shape the way that agents generalize out of distribution from ambiguous, causally confounded training, and explanations even allow agents to learn to perform experimental interventions to identify causal relationships. Our results suggest that language description and explanation may be powerful tools for improving agent learning and generalization.}
}
Endnote
%0 Conference Paper
%T Tell me why! Explanations support learning relational and causal structure
%A Andrew K Lampinen
%A Nicholas Roy
%A Ishita Dasgupta
%A Stephanie Cy Chan
%A Allison Tam
%A James Mcclelland
%A Chen Yan
%A Adam Santoro
%A Neil C Rabinowitz
%A Jane Wang
%A Felix Hill
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-lampinen22a
%I PMLR
%P 11868--11890
%U https://proceedings.mlr.press/v162/lampinen22a.html
%V 162
%X Inferring the abstract relational and causal structure of the world is a major challenge for reinforcement-learning (RL) agents. For humans, language, particularly in the form of explanations, plays a considerable role in overcoming this challenge. Here, we show that language can play a similar role for deep RL agents in complex environments. While agents typically struggle to acquire relational and causal knowledge, augmenting their experience by training them to predict language descriptions and explanations can overcome these limitations. We show that language can help agents learn challenging relational tasks, and examine which aspects of language contribute to its benefits. We then show that explanations can help agents to infer not only relational but also causal structure. Language can shape the way that agents generalize out of distribution from ambiguous, causally confounded training, and explanations even allow agents to learn to perform experimental interventions to identify causal relationships. Our results suggest that language description and explanation may be powerful tools for improving agent learning and generalization.
APA
Lampinen, A.K., Roy, N., Dasgupta, I., Chan, S.C., Tam, A., Mcclelland, J., Yan, C., Santoro, A., Rabinowitz, N.C., Wang, J. & Hill, F. (2022). Tell me why! Explanations support learning relational and causal structure. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:11868-11890. Available from https://proceedings.mlr.press/v162/lampinen22a.html.