A Dual Representation Framework for Robot Learning with Human Guidance

Ruohan Zhang, Dhruva Bansal, Yilun Hao, Ayano Hiranaka, Jialu Gao, Chen Wang, Roberto Martín-Martín, Li Fei-Fei, Jiajun Wu
Proceedings of The 6th Conference on Robot Learning, PMLR 205:738-750, 2023.

Abstract

The ability to interactively learn skills from human guidance and adjust behavior according to human preference is crucial to accelerating robot learning. But human guidance is an expensive resource, calling for methods that can learn efficiently. In this work, we argue that learning is more efficient if the agent is equipped with a high-level, symbolic representation. We propose a dual representation framework for robot learning from human guidance. The dual representation used by the robotic agent includes one for learning a sensorimotor control policy, and the other, in the form of a symbolic scene graph, for encoding the task-relevant information that motivates human input. We propose two novel learning algorithms based on this framework for learning from human evaluative feedback and from preference. In five continuous control tasks in simulation and in the real world, we demonstrate that our algorithms lead to significant improvement in task performance and learning speed. Additionally, these algorithms require less human effort and are qualitatively preferred by users.

Cite this Paper


BibTeX
@InProceedings{pmlr-v205-zhang23a,
  title     = {A Dual Representation Framework for Robot Learning with Human Guidance},
  author    = {Zhang, Ruohan and Bansal, Dhruva and Hao, Yilun and Hiranaka, Ayano and Gao, Jialu and Wang, Chen and Mart\'in-Mart\'in, Roberto and Fei-Fei, Li and Wu, Jiajun},
  booktitle = {Proceedings of The 6th Conference on Robot Learning},
  pages     = {738--750},
  year      = {2023},
  editor    = {Liu, Karen and Kulic, Dana and Ichnowski, Jeff},
  volume    = {205},
  series    = {Proceedings of Machine Learning Research},
  month     = {14--18 Dec},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v205/zhang23a/zhang23a.pdf},
  url       = {https://proceedings.mlr.press/v205/zhang23a.html},
  abstract  = {The ability to interactively learn skills from human guidance and adjust behavior according to human preference is crucial to accelerating robot learning. But human guidance is an expensive resource, calling for methods that can learn efficiently. In this work, we argue that learning is more efficient if the agent is equipped with a high-level, symbolic representation. We propose a dual representation framework for robot learning from human guidance. The dual representation used by the robotic agent includes one for learning a sensorimotor control policy, and the other, in the form of a symbolic scene graph, for encoding the task-relevant information that motivates human input. We propose two novel learning algorithms based on this framework for learning from human evaluative feedback and from preference. In five continuous control tasks in simulation and in the real world, we demonstrate that our algorithms lead to significant improvement in task performance and learning speed. Additionally, these algorithms require less human effort and are qualitatively preferred by users.}
}
Endnote
%0 Conference Paper
%T A Dual Representation Framework for Robot Learning with Human Guidance
%A Ruohan Zhang
%A Dhruva Bansal
%A Yilun Hao
%A Ayano Hiranaka
%A Jialu Gao
%A Chen Wang
%A Roberto Martín-Martín
%A Li Fei-Fei
%A Jiajun Wu
%B Proceedings of The 6th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Karen Liu
%E Dana Kulic
%E Jeff Ichnowski
%F pmlr-v205-zhang23a
%I PMLR
%P 738--750
%U https://proceedings.mlr.press/v205/zhang23a.html
%V 205
%X The ability to interactively learn skills from human guidance and adjust behavior according to human preference is crucial to accelerating robot learning. But human guidance is an expensive resource, calling for methods that can learn efficiently. In this work, we argue that learning is more efficient if the agent is equipped with a high-level, symbolic representation. We propose a dual representation framework for robot learning from human guidance. The dual representation used by the robotic agent includes one for learning a sensorimotor control policy, and the other, in the form of a symbolic scene graph, for encoding the task-relevant information that motivates human input. We propose two novel learning algorithms based on this framework for learning from human evaluative feedback and from preference. In five continuous control tasks in simulation and in the real world, we demonstrate that our algorithms lead to significant improvement in task performance and learning speed. Additionally, these algorithms require less human effort and are qualitatively preferred by users.
APA
Zhang, R., Bansal, D., Hao, Y., Hiranaka, A., Gao, J., Wang, C., Martín-Martín, R., Fei-Fei, L. & Wu, J. (2023). A Dual Representation Framework for Robot Learning with Human Guidance. Proceedings of The 6th Conference on Robot Learning, in Proceedings of Machine Learning Research 205:738-750. Available from https://proceedings.mlr.press/v205/zhang23a.html.