PLAS: Latent Action Space for Offline Reinforcement Learning

Wenxuan Zhou, Sujay Bajracharya, David Held
Proceedings of the 2020 Conference on Robot Learning, PMLR 155:1719-1735, 2021.

Abstract

The goal of offline reinforcement learning is to learn a policy from a fixed dataset, without further interactions with the environment. This setting will be an increasingly important paradigm for real-world applications of reinforcement learning such as robotics, in which data collection is slow and potentially dangerous. Existing off-policy algorithms have limited performance on static datasets due to extrapolation errors from out-of-distribution actions. This leads to the challenge of constraining the policy to select actions within the support of the dataset during training. We propose to simply learn the Policy in the Latent Action Space (PLAS) such that this requirement is naturally satisfied. We evaluate our method on continuous control benchmarks in simulation and a deformable object manipulation task with a physical robot. We demonstrate that our method provides competitive performance consistently across various continuous control tasks and different types of datasets, outperforming existing offline reinforcement learning methods with explicit constraints.
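
The construction behind PLAS can be sketched concisely: a conditional VAE is first trained to reconstruct dataset actions given states, and the policy then acts in the VAE's latent space, so every decoded action stays close to the data support. Below is a minimal PyTorch sketch of that idea, assuming the CVAE decoder has already been trained on the offline dataset; the module names, network sizes, and the latent bound are illustrative assumptions, not the authors' exact implementation.

import torch
import torch.nn as nn

class Decoder(nn.Module):
    # CVAE decoder p(a | s, z): maps a state and a latent code to an action.
    # Assumed to be pretrained on the offline dataset (training loop omitted).
    def __init__(self, state_dim, action_dim, latent_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim), nn.Tanh(),  # actions scaled to [-1, 1]
        )

    def forward(self, state, z):
        return self.net(torch.cat([state, z], dim=-1))

class LatentPolicy(nn.Module):
    # Deterministic policy over the latent space. The tanh squashing keeps its
    # output inside a high-density region of the CVAE prior N(0, I), which is
    # what implicitly constrains decoded actions to the dataset's support.
    def __init__(self, state_dim, latent_dim, max_latent=2.0, hidden=256):
        super().__init__()
        self.max_latent = max_latent  # assumed bound, e.g. ~2 prior std devs
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, latent_dim),
        )

    def forward(self, state):
        return self.max_latent * torch.tanh(self.net(state))

def select_action(state, policy, decoder):
    # Environment action = decode(latent chosen by the policy).
    with torch.no_grad():
        return decoder(state, policy(state))

Because the decoder only ever learned to output dataset-like actions, the latent policy can be trained with a standard off-policy critic (e.g., a TD3-style update) without an explicit divergence penalty against the behavior policy, which is the contrast with explicit-constraint methods noted in the abstract.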

Cite this Paper


BibTeX
@InProceedings{pmlr-v155-zhou21b,
  title     = {PLAS: Latent Action Space for Offline Reinforcement Learning},
  author    = {Zhou, Wenxuan and Bajracharya, Sujay and Held, David},
  booktitle = {Proceedings of the 2020 Conference on Robot Learning},
  pages     = {1719--1735},
  year      = {2021},
  editor    = {Kober, Jens and Ramos, Fabio and Tomlin, Claire},
  volume    = {155},
  series    = {Proceedings of Machine Learning Research},
  month     = {16--18 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v155/zhou21b/zhou21b.pdf},
  url       = {https://proceedings.mlr.press/v155/zhou21b.html}
}
Endnote
%0 Conference Paper
%T PLAS: Latent Action Space for Offline Reinforcement Learning
%A Wenxuan Zhou
%A Sujay Bajracharya
%A David Held
%B Proceedings of the 2020 Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Jens Kober
%E Fabio Ramos
%E Claire Tomlin
%F pmlr-v155-zhou21b
%I PMLR
%P 1719--1735
%U https://proceedings.mlr.press/v155/zhou21b.html
%V 155
APA
Zhou, W., Bajracharya, S. & Held, D. (2021). PLAS: Latent Action Space for Offline Reinforcement Learning. Proceedings of the 2020 Conference on Robot Learning, in Proceedings of Machine Learning Research 155:1719-1735. Available from https://proceedings.mlr.press/v155/zhou21b.html.