Learning Dexterous Manipulation from Suboptimal Experts

Rae Jeong, Jost Tobias Springenberg, Jackie Kay, Dan Zheng, Alexandre Galashov, Nicolas Heess, Francesco Nori
Proceedings of the 2020 Conference on Robot Learning, PMLR 155:915-934, 2021.

Abstract

Learning dexterous manipulation in high-dimensional state-action spaces is an important open challenge with exploration presenting a major bottleneck. Although in many cases the learning process could be guided by demonstrations or other suboptimal experts, current RL algorithms for continuous action spaces often fail to effectively utilize combinations of highly off-policy expert data and on-policy exploration data. As a solution, we introduce Relative Entropy Q-Learning (REQ), a simple policy iteration algorithm that combines ideas from successful offline and conventional RL algorithms. It represents the optimal policy via importance sampling from a learned prior and is well-suited to take advantage of mixed data distributions. We demonstrate experimentally that REQ outperforms several strong baselines on robotic manipulation tasks for which suboptimal experts are available. We show how suboptimal experts can be constructed effectively by composing simple waypoint tracking controllers, and we also show how learned primitives can be combined with waypoint controllers to obtain reference behaviors to bootstrap a complex manipulation task on a simulated bimanual robot with human-like hands. Finally, we show that REQ is also effective for general off-policy RL, offline RL, and RL from demonstrations.
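The abstract's central mechanism, representing the (approximately) optimal policy by sampling candidate actions from a learned prior and reweighting them by the Q-function, can be sketched as follows. This is a minimal illustration only: the callables prior_sample and q_fn, the sample count, and the temperature are illustrative assumptions, not the paper's actual procedure, API, or hyperparameters.

    import numpy as np

    def importance_sampled_action(state, prior_sample, q_fn,
                                  num_samples=64, temperature=1.0, rng=None):
        # Hedged sketch of the idea in the abstract: draw candidate actions
        # from a learned prior and reweight them by exp(Q / temperature).
        rng = np.random.default_rng() if rng is None else rng
        # Candidate actions sampled from the learned prior policy.
        actions = np.stack([prior_sample(state) for _ in range(num_samples)])
        # Score each candidate with the learned action-value function.
        q_values = np.array([q_fn(state, a) for a in actions])
        # Softmax weights proportional to exp(Q / temperature) serve as
        # importance weights over the prior's samples (max-subtracted for
        # numerical stability).
        weights = np.exp((q_values - q_values.max()) / temperature)
        weights /= weights.sum()
        # Sample one action in proportion to its weight.
        return actions[rng.choice(num_samples, p=weights)]

In this sketch the prior concentrates exploration on plausible actions (e.g. those suggested by suboptimal expert data), while the Q-weighted resampling shifts probability mass toward higher-value actions.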

Cite this Paper


BibTeX
@InProceedings{pmlr-v155-jeong21a,
  title     = {Learning Dexterous Manipulation from Suboptimal Experts},
  author    = {Jeong, Rae and Springenberg, Jost Tobias and Kay, Jackie and Zheng, Dan and Galashov, Alexandre and Heess, Nicolas and Nori, Francesco},
  booktitle = {Proceedings of the 2020 Conference on Robot Learning},
  pages     = {915--934},
  year      = {2021},
  editor    = {Kober, Jens and Ramos, Fabio and Tomlin, Claire},
  volume    = {155},
  series    = {Proceedings of Machine Learning Research},
  month     = {16--18 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v155/jeong21a/jeong21a.pdf},
  url       = {https://proceedings.mlr.press/v155/jeong21a.html},
  abstract  = {Learning dexterous manipulation in high-dimensional state-action spaces is an important open challenge with exploration presenting a major bottleneck. Although in many cases the learning process could be guided by demonstrations or other suboptimal experts, current RL algorithms for continuous action spaces often fail to effectively utilize combinations of highly off-policy expert data and on-policy exploration data. As a solution, we introduce Relative Entropy Q-Learning (REQ), a simple policy iteration algorithm that combines ideas from successful offline and conventional RL algorithms. It represents the optimal policy via importance sampling from a learned prior and is well-suited to take advantage of mixed data distributions. We demonstrate experimentally that REQ outperforms several strong baselines on robotic manipulation tasks for which suboptimal experts are available. We show how suboptimal experts can be constructed effectively by composing simple waypoint tracking controllers, and we also show how learned primitives can be combined with waypoint controllers to obtain reference behaviors to bootstrap a complex manipulation task on a simulated bimanual robot with human-like hands. Finally, we show that REQ is also effective for general off-policy RL, offline RL, and RL from demonstrations.}
}
Endnote
%0 Conference Paper
%T Learning Dexterous Manipulation from Suboptimal Experts
%A Rae Jeong
%A Jost Tobias Springenberg
%A Jackie Kay
%A Dan Zheng
%A Alexandre Galashov
%A Nicolas Heess
%A Francesco Nori
%B Proceedings of the 2020 Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Jens Kober
%E Fabio Ramos
%E Claire Tomlin
%F pmlr-v155-jeong21a
%I PMLR
%P 915--934
%U https://proceedings.mlr.press/v155/jeong21a.html
%V 155
%X Learning dexterous manipulation in high-dimensional state-action spaces is an important open challenge with exploration presenting a major bottleneck. Although in many cases the learning process could be guided by demonstrations or other suboptimal experts, current RL algorithms for continuous action spaces often fail to effectively utilize combinations of highly off-policy expert data and on-policy exploration data. As a solution, we introduce Relative Entropy Q-Learning (REQ), a simple policy iteration algorithm that combines ideas from successful offline and conventional RL algorithms. It represents the optimal policy via importance sampling from a learned prior and is well-suited to take advantage of mixed data distributions. We demonstrate experimentally that REQ outperforms several strong baselines on robotic manipulation tasks for which suboptimal experts are available. We show how suboptimal experts can be constructed effectively by composing simple waypoint tracking controllers, and we also show how learned primitives can be combined with waypoint controllers to obtain reference behaviors to bootstrap a complex manipulation task on a simulated bimanual robot with human-like hands. Finally, we show that REQ is also effective for general off-policy RL, offline RL, and RL from demonstrations.
APA
Jeong, R., Springenberg, J.T., Kay, J., Zheng, D., Galashov, A., Heess, N. & Nori, F. (2021). Learning Dexterous Manipulation from Suboptimal Experts. Proceedings of the 2020 Conference on Robot Learning, in Proceedings of Machine Learning Research 155:915-934. Available from https://proceedings.mlr.press/v155/jeong21a.html.
