Solving Complex Manipulation Tasks with Model-Assisted Model-Free Reinforcement Learning

Jianshu Hu, Paul Weng
Proceedings of The 6th Conference on Robot Learning, PMLR 205:1299-1308, 2023.

Abstract

In this paper, we propose a novel deep reinforcement learning approach for improving the sample efficiency of a model-free actor-critic method by using a learned model to encourage exploration. The basic idea consists in generating artificial transitions with noisy actions, which can be used to update the critic. To counteract the model bias, we introduce a high initialization for the critic and two filters for the artificial transitions. Finally, we evaluate our approach with the TD3 algorithm on different robotic tasks and demonstrate that it achieves a better performance with higher sample efficiency than several other model-based and model-free methods.
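To make the idea above concrete, here is a minimal Python/PyTorch sketch of how such model-assisted critic updates could be wired up. It is not the authors' implementation: the interface of `dynamics_model` (returning a predicted next state, reward, and an uncertainty estimate), the specific filter criteria, and the way the "high initialization" is realized are all illustrative assumptions rather than details taken from the paper.

```python
import torch
import torch.nn as nn

# Sketch of model-assisted artificial transitions for a TD3-style critic.
# Assumed (hypothetical) interfaces: `dynamics_model(s, a)` returns predicted
# next states, rewards, and a per-transition uncertainty; `actor(s)` returns
# deterministic actions in [-max_action, max_action].


def artificial_transitions(dynamics_model, actor, states,
                           noise_std=0.2, noise_clip=0.5, max_action=1.0):
    """One-step rollout of the learned model with noisy actions (exploration)."""
    with torch.no_grad():
        actions = actor(states)
        noise = (torch.randn_like(actions) * noise_std).clamp(-noise_clip, noise_clip)
        noisy_actions = (actions + noise).clamp(-max_action, max_action)
        next_states, rewards, uncertainty = dynamics_model(states, noisy_actions)
    return (states, noisy_actions, rewards, next_states), uncertainty


def filter_transitions(batch, uncertainty, q_estimates,
                       max_uncertainty=0.1, min_q=-100.0):
    """Two illustrative filters against model bias: drop transitions whose model
    uncertainty is too large or whose critic estimate is implausibly low.
    (The paper uses two filters; these particular criteria are assumptions.)"""
    keep = (uncertainty.view(-1) < max_uncertainty) & (q_estimates.view(-1) > min_q)
    return tuple(t[keep] for t in batch)


def optimistic_critic(state_dim, action_dim, init_bias=10.0):
    """Q-network whose output bias starts high, one simple way to realize a
    'high initialization' so that unexplored state-actions look promising."""
    q = nn.Sequential(nn.Linear(state_dim + action_dim, 256), nn.ReLU(),
                      nn.Linear(256, 256), nn.ReLU(),
                      nn.Linear(256, 1))
    nn.init.constant_(q[-1].bias, init_bias)
    return q
```

In a TD3-style training loop, the filtered artificial transitions would simply be mixed into the critic's update batches alongside real environment transitions.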

Cite this Paper


BibTeX
@InProceedings{pmlr-v205-hu23a,
  title     = {Solving Complex Manipulation Tasks with Model-Assisted Model-Free Reinforcement Learning},
  author    = {Hu, Jianshu and Weng, Paul},
  booktitle = {Proceedings of The 6th Conference on Robot Learning},
  pages     = {1299--1308},
  year      = {2023},
  editor    = {Liu, Karen and Kulic, Dana and Ichnowski, Jeff},
  volume    = {205},
  series    = {Proceedings of Machine Learning Research},
  month     = {14--18 Dec},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v205/hu23a/hu23a.pdf},
  url       = {https://proceedings.mlr.press/v205/hu23a.html},
  abstract  = {In this paper, we propose a novel deep reinforcement learning approach for improving the sample efficiency of a model-free actor-critic method by using a learned model to encourage exploration. The basic idea consists in generating artificial transitions with noisy actions, which can be used to update the critic. To counteract the model bias, we introduce a high initialization for the critic and two filters for the artificial transitions. Finally, we evaluate our approach with the TD3 algorithm on different robotic tasks and demonstrate that it achieves a better performance with higher sample efficiency than several other model-based and model-free methods.}
}
Endnote
%0 Conference Paper
%T Solving Complex Manipulation Tasks with Model-Assisted Model-Free Reinforcement Learning
%A Jianshu Hu
%A Paul Weng
%B Proceedings of The 6th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Karen Liu
%E Dana Kulic
%E Jeff Ichnowski
%F pmlr-v205-hu23a
%I PMLR
%P 1299--1308
%U https://proceedings.mlr.press/v205/hu23a.html
%V 205
%X In this paper, we propose a novel deep reinforcement learning approach for improving the sample efficiency of a model-free actor-critic method by using a learned model to encourage exploration. The basic idea consists in generating artificial transitions with noisy actions, which can be used to update the critic. To counteract the model bias, we introduce a high initialization for the critic and two filters for the artificial transitions. Finally, we evaluate our approach with the TD3 algorithm on different robotic tasks and demonstrate that it achieves a better performance with higher sample efficiency than several other model-based and model-free methods.
APA
Hu, J. & Weng, P. (2023). Solving Complex Manipulation Tasks with Model-Assisted Model-Free Reinforcement Learning. Proceedings of The 6th Conference on Robot Learning, in Proceedings of Machine Learning Research 205:1299-1308. Available from https://proceedings.mlr.press/v205/hu23a.html.
