Disentangled Relational Representations for Explaining and Learning from Demonstration

Yordan Hristov, Daniel Angelov, Michael Burke, Alex Lascarides, Subramanian Ramamoorthy
Proceedings of the Conference on Robot Learning, PMLR 100:870-884, 2020.

Abstract

Learning from demonstration is an effective method for human users to instruct desired robot behaviour. However, for most non-trivial tasks of practical interest, efficient learning from demonstration depends crucially on inductive bias in the chosen structure for rewards/costs and policies. We address the case where this inductive bias comes from an exchange with a human user. We propose a method in which a learning agent utilizes the information bottleneck layer of a high-parameter variational neural model, with auxiliary loss terms, in order to ground abstract concepts such as spatial relations. The concepts are referred to in natural language instructions and are manifested in the high-dimensional sensory input stream the agent receives from the world. We evaluate the properties of the latent space of the learned model in a photorealistic synthetic environment and particularly focus on examining its usability for downstream tasks. Additionally, through a series of controlled table-top manipulation experiments, we demonstrate that the learned manifold can be used to ground demonstrations as symbolic plans, which can then be executed on a PR2 robot.
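
To make the architecture described in the abstract concrete, below is a minimal sketch of one way such a model might be wired up: a β-VAE-style encoder whose information bottleneck is split into labelled and residual dimensions, with a small auxiliary classifier per labelled dimension so that each one is forced to carry a named concept (e.g. a spatial relation). This is an illustrative assumption, not the authors' implementation; the class name ConceptVAE, the layer sizes, and the loss weights beta and gamma are all hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConceptVAE(nn.Module):
    """Hypothetical sketch: a variational bottleneck split into labelled
    dimensions (one per concept group, e.g. left/right, front/behind)
    and unlabelled residual dimensions."""
    def __init__(self, x_dim=64, z_labelled=2, z_residual=4, n_classes=(2, 2)):
        super().__init__()
        z_dim = z_labelled + z_residual
        self.encoder = nn.Sequential(
            nn.Linear(x_dim, 128), nn.ReLU(),
            nn.Linear(128, 2 * z_dim),  # mean and log-variance
        )
        self.decoder = nn.Sequential(
            nn.Linear(z_dim, 128), nn.ReLU(),
            nn.Linear(128, x_dim),
        )
        # One small classifier per labelled latent dimension: the auxiliary
        # loss grounds that dimension in the concept named by the user.
        self.classifiers = nn.ModuleList(nn.Linear(1, k) for k in n_classes)
        self.z_labelled = z_labelled

    def forward(self, x):
        mu, logvar = self.encoder(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterise
        return self.decoder(z), mu, logvar, z

def loss(model, x, labels, beta=4.0, gamma=1.0):
    """ELBO with a beta-weighted KL term plus auxiliary classification
    losses on the labelled latent dimensions."""
    x_hat, mu, logvar, z = model(x)
    recon = F.mse_loss(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    aux = sum(
        F.cross_entropy(clf(z[:, i : i + 1]), labels[:, i])
        for i, clf in enumerate(model.classifiers)
    )
    return recon + beta * kl + gamma * aux
```

In this sketch only the labelled dimensions receive the auxiliary supervision, while the residual dimensions remain free to capture unnamed factors of variation; weights on downstream labels could then be read off the labelled axes when grounding a demonstration as a symbolic plan.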

Cite this Paper


BibTeX
@InProceedings{pmlr-v100-hristov20a,
  title     = {Disentangled Relational Representations for Explaining and Learning from Demonstration},
  author    = {Hristov, Yordan and Angelov, Daniel and Burke, Michael and Lascarides, Alex and Ramamoorthy, Subramanian},
  booktitle = {Proceedings of the Conference on Robot Learning},
  pages     = {870--884},
  year      = {2020},
  editor    = {Kaelbling, Leslie Pack and Kragic, Danica and Sugiura, Komei},
  volume    = {100},
  series    = {Proceedings of Machine Learning Research},
  month     = {30 Oct--01 Nov},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v100/hristov20a/hristov20a.pdf},
  url       = {https://proceedings.mlr.press/v100/hristov20a.html}
}
Endnote
%0 Conference Paper
%T Disentangled Relational Representations for Explaining and Learning from Demonstration
%A Yordan Hristov
%A Daniel Angelov
%A Michael Burke
%A Alex Lascarides
%A Subramanian Ramamoorthy
%B Proceedings of the Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Leslie Pack Kaelbling
%E Danica Kragic
%E Komei Sugiura
%F pmlr-v100-hristov20a
%I PMLR
%P 870--884
%U https://proceedings.mlr.press/v100/hristov20a.html
%V 100
APA
Hristov, Y., Angelov, D., Burke, M., Lascarides, A. & Ramamoorthy, S. (2020). Disentangled Relational Representations for Explaining and Learning from Demonstration. Proceedings of the Conference on Robot Learning, in Proceedings of Machine Learning Research 100:870-884. Available from https://proceedings.mlr.press/v100/hristov20a.html.