Model-Based Inverse Reinforcement Learning from Visual Demonstrations

Neha Das, Sarah Bechtle, Todor Davchev, Dinesh Jayaraman, Akshara Rai, Franziska Meier
Proceedings of the 2020 Conference on Robot Learning, PMLR 155:1930-1942, 2021.

Abstract

Scaling model-based inverse reinforcement learning (IRL) to real robotic manipulation tasks with unknown dynamics remains an open problem. The key challenges lie in learning good dynamics models, developing algorithms that scale to high-dimensional state-spaces and being able to learn from both visual and proprioceptive demonstrations. In this work, we present a gradient-based inverse reinforcement learning framework that utilizes a pre-trained visual dynamics model to learn cost functions when given only visual human demonstrations. The learned cost functions are then used to reproduce the demonstrated behavior via visual model predictive control. We evaluate our framework on hardware on two basic object manipulation tasks.

Cite this Paper
BibTeX
@InProceedings{pmlr-v155-das21a,
  title     = {Model-Based Inverse Reinforcement Learning from Visual Demonstrations},
  author    = {Das, Neha and Bechtle, Sarah and Davchev, Todor and Jayaraman, Dinesh and Rai, Akshara and Meier, Franziska},
  booktitle = {Proceedings of the 2020 Conference on Robot Learning},
  pages     = {1930--1942},
  year      = {2021},
  editor    = {Kober, Jens and Ramos, Fabio and Tomlin, Claire},
  volume    = {155},
  series    = {Proceedings of Machine Learning Research},
  month     = {16--18 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v155/das21a/das21a.pdf},
  url       = {https://proceedings.mlr.press/v155/das21a.html},
  abstract  = {Scaling model-based inverse reinforcement learning (IRL) to real robotic manipulation tasks with unknown dynamics remains an open problem. The key challenges lie in learning good dynamics models, developing algorithms that scale to high-dimensional state-spaces and being able to learn from both visual and proprioceptive demonstrations. In this work, we present a gradient-based inverse reinforcement learning framework that utilizes a pre-trained visual dynamics model to learn cost functions when given only visual human demonstrations. The learned cost functions are then used to reproduce the demonstrated behavior via visual model predictive control. We evaluate our framework on hardware on two basic object manipulation tasks.}
}
Endnote
%0 Conference Paper
%T Model-Based Inverse Reinforcement Learning from Visual Demonstrations
%A Neha Das
%A Sarah Bechtle
%A Todor Davchev
%A Dinesh Jayaraman
%A Akshara Rai
%A Franziska Meier
%B Proceedings of the 2020 Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Jens Kober
%E Fabio Ramos
%E Claire Tomlin
%F pmlr-v155-das21a
%I PMLR
%P 1930--1942
%U https://proceedings.mlr.press/v155/das21a.html
%V 155
%X Scaling model-based inverse reinforcement learning (IRL) to real robotic manipulation tasks with unknown dynamics remains an open problem. The key challenges lie in learning good dynamics models, developing algorithms that scale to high-dimensional state-spaces and being able to learn from both visual and proprioceptive demonstrations. In this work, we present a gradient-based inverse reinforcement learning framework that utilizes a pre-trained visual dynamics model to learn cost functions when given only visual human demonstrations. The learned cost functions are then used to reproduce the demonstrated behavior via visual model predictive control. We evaluate our framework on hardware on two basic object manipulation tasks.
APA
Das, N., Bechtle, S., Davchev, T., Jayaraman, D., Rai, A., & Meier, F. (2021). Model-Based Inverse Reinforcement Learning from Visual Demonstrations. Proceedings of the 2020 Conference on Robot Learning, in Proceedings of Machine Learning Research, 155:1930-1942. Available from https://proceedings.mlr.press/v155/das21a.html.