Interpretable Latent Spaces for Learning from Demonstration

Yordan Hristov, Alex Lascarides, Subramanian Ramamoorthy
Proceedings of The 2nd Conference on Robot Learning, PMLR 87:957-968, 2018.

Abstract

Effective human-robot interaction, such as in robot learning from human demonstration, requires the learning agent to ground abstract concepts (such as those contained within instructions) in a corresponding high-dimensional sensory input stream from the world. Models such as deep neural networks, with high capacity through their large parameter spaces, can be used to compress the high-dimensional sensory data to lower-dimensional representations. These low-dimensional representations facilitate symbol grounding, but may not guarantee that the representation is human-interpretable. We propose a method that utilises the grouping of user-defined symbols and their corresponding sensory observations in order to align the learnt compressed latent representation with the semantic notions contained in the abstract labels. We demonstrate this through experiments with both simulated and real-world object data, showing that such alignment can be achieved in a process of physical symbol grounding.
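
To make the idea concrete, below is a minimal sketch (not the authors' released code) of one way such label-aligned compression could be set up: a VAE-style convolutional encoder compresses images to a low-dimensional latent vector, and each user-defined group of symbols is tied to a single designated latent dimension through an auxiliary classification loss, so that axis comes to carry that group's semantics. The class name ConceptAlignedVAE, the label groups "colour" and "shape", the image size, and the loss weights beta and gamma are all assumptions made for illustration.

# Hypothetical sketch of label-aligned latent compression (not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConceptAlignedVAE(nn.Module):
    def __init__(self, latent_dim=6, label_groups={"colour": 4, "shape": 3}):
        super().__init__()
        self.latent_dim = latent_dim
        # Simple convolutional encoder for 3x64x64 inputs -> mean and log-variance.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 2 * latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (64, 16, 16)),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Sigmoid(),
        )
        # One tiny classifier per label group, each reading a *single* latent
        # dimension; this is what pushes that axis to encode the group's concept.
        self.classifiers = nn.ModuleDict({
            g: nn.Linear(1, n_classes) for g, n_classes in label_groups.items()
        })
        self.group_dims = {g: i for i, g in enumerate(label_groups)}

    def forward(self, x):
        mu, logvar = self.encoder(x).chunk(2, dim=1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.decoder(z), mu, logvar, z

    def loss(self, x, labels, beta=4.0, gamma=10.0):
        recon, mu, logvar, z = self(x)
        recon_loss = F.mse_loss(recon, x, reduction="sum") / x.size(0)
        kl = -0.5 * torch.mean(
            torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1))
        # Auxiliary term aligning each label group with its designated axis.
        align = sum(
            F.cross_entropy(self.classifiers[g](z[:, d:d + 1]), labels[g])
            for g, d in self.group_dims.items()
        )
        return recon_loss + beta * kl + gamma * align

A typical training step would call model.loss(images, {"colour": colour_ids, "shape": shape_ids}) and backpropagate. Because each classifier sees only one latent coordinate, the encoder is encouraged to place all of that group's variation on that coordinate, while the remaining dimensions stay unconstrained; that per-axis alignment is what would make the compressed representation readable to a human.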

Cite this Paper


BibTeX
@InProceedings{pmlr-v87-hristov18a,
  title     = {Interpretable Latent Spaces for Learning from Demonstration},
  author    = {Hristov, Yordan and Lascarides, Alex and Ramamoorthy, Subramanian},
  booktitle = {Proceedings of The 2nd Conference on Robot Learning},
  pages     = {957--968},
  year      = {2018},
  editor    = {Billard, Aude and Dragan, Anca and Peters, Jan and Morimoto, Jun},
  volume    = {87},
  series    = {Proceedings of Machine Learning Research},
  month     = {29--31 Oct},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v87/hristov18a/hristov18a.pdf},
  url       = {https://proceedings.mlr.press/v87/hristov18a.html},
  abstract  = {Effective human-robot interaction, such as in robot learning from human demonstration, requires the learning agent to be able to ground abstract concepts (such as those contained within instructions) in a corresponding high-dimensional sensory input stream from the world. Models such as deep neural networks, with high capacity through their large parameter spaces, can be used to compress the high-dimensional sensory data to lower dimensional representations. These low-dimensional representations facilitate symbol grounding, but may not guarantee that the representation would be human-interpretable. We propose a method which utilises the grouping of user-defined symbols and their corresponding sensory observations in order to align the learnt compressed latent representation with the semantic notions contained in the abstract labels. We demonstrate this through experiments with both simulated and real-world object data, showing that such alignment can be achieved in a process of physical symbol grounding.}
}
Endnote
%0 Conference Paper
%T Interpretable Latent Spaces for Learning from Demonstration
%A Yordan Hristov
%A Alex Lascarides
%A Subramanian Ramamoorthy
%B Proceedings of The 2nd Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Aude Billard
%E Anca Dragan
%E Jan Peters
%E Jun Morimoto
%F pmlr-v87-hristov18a
%I PMLR
%P 957--968
%U https://proceedings.mlr.press/v87/hristov18a.html
%V 87
%X Effective human-robot interaction, such as in robot learning from human demonstration, requires the learning agent to be able to ground abstract concepts (such as those contained within instructions) in a corresponding high-dimensional sensory input stream from the world. Models such as deep neural networks, with high capacity through their large parameter spaces, can be used to compress the high-dimensional sensory data to lower dimensional representations. These low-dimensional representations facilitate symbol grounding, but may not guarantee that the representation would be human-interpretable. We propose a method which utilises the grouping of user-defined symbols and their corresponding sensory observations in order to align the learnt compressed latent representation with the semantic notions contained in the abstract labels. We demonstrate this through experiments with both simulated and real-world object data, showing that such alignment can be achieved in a process of physical symbol grounding.
APA
Hristov, Y., Lascarides, A. & Ramamoorthy, S. (2018). Interpretable Latent Spaces for Learning from Demonstration. Proceedings of The 2nd Conference on Robot Learning, in Proceedings of Machine Learning Research 87:957-968. Available from https://proceedings.mlr.press/v87/hristov18a.html.