Learning Feasibility to Imitate Demonstrators with Different Dynamics

Zhangjie Cao, Yilun Hao, Mengxi Li, Dorsa Sadigh
Proceedings of the 5th Conference on Robot Learning, PMLR 164:363-372, 2022.

Abstract

The goal of learning from demonstrations is to learn a policy for an agent (imitator) by mimicking the behavior in the demonstrations. Prior works on learning from demonstrations assume that the demonstrations are collected by a demonstrator that has the same dynamics as the imitator. However, in many real-world applications, this assumption is limiting: to alleviate the lack of data in robotics, we would like to be able to leverage demonstrations collected from agents with different dynamics. This is challenging because the demonstrations might not even be feasible for the imitator. Our insight is that we can learn a feasibility metric that captures the likelihood that a demonstration is feasible for the imitator. We develop a feasibility MDP (f-MDP) and derive the feasibility score by learning an optimal policy in the f-MDP. Our proposed feasibility measure encourages the imitator to learn from the more informative demonstrations and to disregard demonstrations that are far from feasible. Our experiments on four simulated environments and on a real robot show that the policy learned with our approach achieves a higher expected return than prior works. Videos of the real robot arm experiments are available on our website.
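The abstract does not give implementation details, so the following is only a minimal sketch of the general idea of feasibility-weighted imitation, not the paper's actual formulation: each demonstration receives a score in [0, 1], and demonstrations that are far from feasible contribute little to a behavioral-cloning-style loss. The function name, the squared-error loss, and the normalization scheme are all illustrative assumptions.

```python
import numpy as np

def feasibility_weighted_bc_loss(pred_actions, demo_actions, feasibility_scores):
    """Weight each demonstration's imitation loss by its feasibility score.

    Demonstrations judged infeasible for the imitator (score near 0)
    contribute little; near-feasible ones (score near 1) dominate.
    Illustrative only -- not the paper's exact objective.
    """
    # per-demonstration mean squared error between predicted and demo actions
    per_demo_loss = np.mean((pred_actions - demo_actions) ** 2, axis=-1)
    # normalize feasibility scores into weights summing to ~1
    weights = feasibility_scores / (feasibility_scores.sum() + 1e-8)
    return float(np.sum(weights * per_demo_loss))

# toy example: two demonstrations with 2-D actions
pred = np.array([[0.0, 0.0], [1.0, 1.0]])
demo = np.array([[0.0, 1.0], [1.0, 1.0]])
scores = np.array([0.2, 0.8])  # second demonstration is far more feasible
loss = feasibility_weighted_bc_loss(pred, demo, scores)
```

Under this toy weighting, the poorly matched but low-feasibility first demonstration contributes only a small fraction of the total loss.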

Cite this Paper


BibTeX
@InProceedings{pmlr-v164-cao22a,
  title     = {Learning Feasibility to Imitate Demonstrators with Different Dynamics},
  author    = {Cao, Zhangjie and Hao, Yilun and Li, Mengxi and Sadigh, Dorsa},
  booktitle = {Proceedings of the 5th Conference on Robot Learning},
  pages     = {363--372},
  year      = {2022},
  editor    = {Faust, Aleksandra and Hsu, David and Neumann, Gerhard},
  volume    = {164},
  series    = {Proceedings of Machine Learning Research},
  month     = {08--11 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v164/cao22a/cao22a.pdf},
  url       = {https://proceedings.mlr.press/v164/cao22a.html},
  abstract  = {The goal of learning from demonstrations is to learn a policy for an agent (imitator) by mimicking the behavior in the demonstrations. Prior works on learning from demonstrations assume that the demonstrations are collected by a demonstrator that has the same dynamics as the imitator. However, in many real-world applications, this assumption is limiting — to improve the problem of lack of data in robotics, we would like to be able to leverage demonstrations collected from agents with different dynamics. This can be challenging as the demonstrations might not even be feasible for the imitator. Our insight is that we can learn a feasibility metric that captures the likelihood of a demonstration being feasible by the imitator. We develop a feasibility MDP (f-MDP) and derive the feasibility score by learning an optimal policy in the f-MDP. Our proposed feasibility measure encourages the imitator to learn from more informative demonstrations, and disregard the far from feasible demonstrations. Our experiments on four simulated environments and on a real robot show that the policy learned with our approach achieves a higher expected return than prior works. We show the videos of the real robot arm experiments on our website.}
}
Endnote
%0 Conference Paper
%T Learning Feasibility to Imitate Demonstrators with Different Dynamics
%A Zhangjie Cao
%A Yilun Hao
%A Mengxi Li
%A Dorsa Sadigh
%B Proceedings of the 5th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Aleksandra Faust
%E David Hsu
%E Gerhard Neumann
%F pmlr-v164-cao22a
%I PMLR
%P 363--372
%U https://proceedings.mlr.press/v164/cao22a.html
%V 164
%X The goal of learning from demonstrations is to learn a policy for an agent (imitator) by mimicking the behavior in the demonstrations. Prior works on learning from demonstrations assume that the demonstrations are collected by a demonstrator that has the same dynamics as the imitator. However, in many real-world applications, this assumption is limiting — to improve the problem of lack of data in robotics, we would like to be able to leverage demonstrations collected from agents with different dynamics. This can be challenging as the demonstrations might not even be feasible for the imitator. Our insight is that we can learn a feasibility metric that captures the likelihood of a demonstration being feasible by the imitator. We develop a feasibility MDP (f-MDP) and derive the feasibility score by learning an optimal policy in the f-MDP. Our proposed feasibility measure encourages the imitator to learn from more informative demonstrations, and disregard the far from feasible demonstrations. Our experiments on four simulated environments and on a real robot show that the policy learned with our approach achieves a higher expected return than prior works. We show the videos of the real robot arm experiments on our website.
APA
Cao, Z., Hao, Y., Li, M. & Sadigh, D. (2022). Learning Feasibility to Imitate Demonstrators with Different Dynamics. Proceedings of the 5th Conference on Robot Learning, in Proceedings of Machine Learning Research 164:363-372. Available from https://proceedings.mlr.press/v164/cao22a.html.