Distilling Motion Planner Augmented Policies into Visual Control Policies for Robot Manipulation

I-Chun Arthur Liu, Shagun Uppal, Gaurav S. Sukhatme, Joseph J Lim, Peter Englert, Youngwoon Lee
Proceedings of the 5th Conference on Robot Learning, PMLR 164:641-650, 2022.

Abstract

Learning complex manipulation tasks in realistic, obstructed environments is a challenging problem due to hard exploration in the presence of obstacles and high-dimensional visual observations. Prior work tackles the exploration problem by integrating motion planning and reinforcement learning. However, the motion planner augmented policy requires access to state information, which is often not available in real-world settings. To this end, we propose to distill a state-based motion planner augmented policy into a visual control policy via (1) visual behavioral cloning to remove the motion planner dependency along with its jittery motion, and (2) vision-based reinforcement learning with the guidance of the smoothed trajectories from the behavioral cloning agent. We evaluate our method on three manipulation tasks in obstructed environments and compare it against various reinforcement learning and imitation learning baselines. The results demonstrate that our framework is highly sample-efficient and outperforms the state-of-the-art algorithms. Moreover, coupled with domain randomization, our policy is capable of zero-shot transfer to unseen environment settings with distractors. Code and videos are available at https://clvrai.com/mopa-pd.

Cite this Paper

BibTeX
@InProceedings{pmlr-v164-liu22b,
  title     = {Distilling Motion Planner Augmented Policies into Visual Control Policies for Robot Manipulation},
  author    = {Liu, I-Chun Arthur and Uppal, Shagun and Sukhatme, Gaurav S. and Lim, Joseph J and Englert, Peter and Lee, Youngwoon},
  booktitle = {Proceedings of the 5th Conference on Robot Learning},
  pages     = {641--650},
  year      = {2022},
  editor    = {Faust, Aleksandra and Hsu, David and Neumann, Gerhard},
  volume    = {164},
  series    = {Proceedings of Machine Learning Research},
  month     = {08--11 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v164/liu22b/liu22b.pdf},
  url       = {https://proceedings.mlr.press/v164/liu22b.html},
  abstract  = {Learning complex manipulation tasks in realistic, obstructed environments is a challenging problem due to hard exploration in the presence of obstacles and high-dimensional visual observations. Prior work tackles the exploration problem by integrating motion planning and reinforcement learning. However, the motion planner augmented policy requires access to state information, which is often not available in real-world settings. To this end, we propose to distill a state-based motion planner augmented policy into a visual control policy via (1) visual behavioral cloning to remove the motion planner dependency along with its jittery motion, and (2) vision-based reinforcement learning with the guidance of the smoothed trajectories from the behavioral cloning agent. We evaluate our method on three manipulation tasks in obstructed environments and compare it against various reinforcement learning and imitation learning baselines. The results demonstrate that our framework is highly sample-efficient and outperforms the state-of-the-art algorithms. Moreover, coupled with domain randomization, our policy is capable of zero-shot transfer to unseen environment settings with distractors. Code and videos are available at https://clvrai.com/mopa-pd.}
}
Endnote
%0 Conference Paper
%T Distilling Motion Planner Augmented Policies into Visual Control Policies for Robot Manipulation
%A I-Chun Arthur Liu
%A Shagun Uppal
%A Gaurav S. Sukhatme
%A Joseph J Lim
%A Peter Englert
%A Youngwoon Lee
%B Proceedings of the 5th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Aleksandra Faust
%E David Hsu
%E Gerhard Neumann
%F pmlr-v164-liu22b
%I PMLR
%P 641--650
%U https://proceedings.mlr.press/v164/liu22b.html
%V 164
%X Learning complex manipulation tasks in realistic, obstructed environments is a challenging problem due to hard exploration in the presence of obstacles and high-dimensional visual observations. Prior work tackles the exploration problem by integrating motion planning and reinforcement learning. However, the motion planner augmented policy requires access to state information, which is often not available in real-world settings. To this end, we propose to distill a state-based motion planner augmented policy into a visual control policy via (1) visual behavioral cloning to remove the motion planner dependency along with its jittery motion, and (2) vision-based reinforcement learning with the guidance of the smoothed trajectories from the behavioral cloning agent. We evaluate our method on three manipulation tasks in obstructed environments and compare it against various reinforcement learning and imitation learning baselines. The results demonstrate that our framework is highly sample-efficient and outperforms the state-of-the-art algorithms. Moreover, coupled with domain randomization, our policy is capable of zero-shot transfer to unseen environment settings with distractors. Code and videos are available at https://clvrai.com/mopa-pd.
APA
Liu, I.A., Uppal, S., Sukhatme, G.S., Lim, J.J., Englert, P. & Lee, Y. (2022). Distilling Motion Planner Augmented Policies into Visual Control Policies for Robot Manipulation. Proceedings of the 5th Conference on Robot Learning, in Proceedings of Machine Learning Research 164:641-650. Available from https://proceedings.mlr.press/v164/liu22b.html.

Related Material