Leveraging Fully Observable Policies for Learning under Partial Observability

Hai Huu Nguyen, Andrea Baisero, Dian Wang, Christopher Amato, Robert Platt
Proceedings of The 6th Conference on Robot Learning, PMLR 205:1673-1683, 2023.

Abstract

Reinforcement learning in partially observable domains is challenging due to the lack of observable state information. Thankfully, learning offline in a simulator with such state information is often possible. In particular, we propose a method for partially observable reinforcement learning that uses a fully observable policy (which we call a "state expert") during training to improve performance. Based on Soft Actor-Critic (SAC), our agent balances performing actions similar to the state expert and getting high returns under partial observability. Our approach can leverage the fully observable policy for exploration and parts of the domain that are fully observable while still being able to learn under partial observability. On six robotics domains, our method outperforms pure imitation, pure reinforcement learning, the sequential or parallel combination of both types, and a recent state-of-the-art method in the same setting. A successful policy transfer to a physical robot in a manipulation task from pixels shows our approach's practicality in learning interesting policies under partial observability.
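
As a rough illustration of the idea described in the abstract (not the authors' exact formulation), a minimal sketch of a SAC-style actor objective augmented with an imitation term toward a fully observable state expert might look like the code below. The weighted-sum form, the mean-squared imitation term, and names such as "bc_weight" and "GaussianActor" are assumptions for illustration only; the paper should be consulted for the actual objective (the real agent would also be recurrent over observation histories and use SAC's tanh-squashed policy).

    # Hypothetical sketch: a SAC-style actor loss plus an imitation term
    # pulling the partially observable agent toward a fully observable
    # "state expert". All names and the weighted-sum form are illustrative
    # assumptions, not the authors' exact algorithm.
    import torch
    import torch.nn as nn

    class GaussianActor(nn.Module):
        """Small Gaussian policy head (no recurrence, no tanh squashing)."""
        def __init__(self, in_dim, act_dim, hidden=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, 2 * act_dim))

        def dist(self, x):
            mu, log_std = self.net(x).chunk(2, dim=-1)
            return torch.distributions.Normal(mu, log_std.clamp(-5, 2).exp())

    def actor_loss(agent, critic, expert, obs, state, alpha=0.2, bc_weight=0.5):
        """SAC-style actor loss plus a term imitating the state expert's action."""
        pi = agent.dist(obs)
        action = pi.rsample()                     # reparameterized sample
        log_prob = pi.log_prob(action).sum(-1)
        q = critic(torch.cat([obs, action], dim=-1)).squeeze(-1)
        sac_term = (alpha * log_prob - q).mean()  # maximize Q while keeping entropy
        with torch.no_grad():
            # Expert acts on the full state; the agent only sees the observation.
            expert_action = expert.dist(state).mean
        bc_term = ((action - expert_action) ** 2).sum(-1).mean()
        return sac_term + bc_weight * bc_term

    # Tiny usage example with random tensors standing in for a batch.
    obs_dim, state_dim, act_dim = 8, 12, 2
    agent = GaussianActor(obs_dim, act_dim)
    expert = GaussianActor(state_dim, act_dim)
    critic = nn.Sequential(nn.Linear(obs_dim + act_dim, 64), nn.ReLU(), nn.Linear(64, 1))
    obs, state = torch.randn(32, obs_dim), torch.randn(32, state_dim)
    loss = actor_loss(agent, critic, expert, obs, state)
    loss.backward()

In this sketch, "bc_weight" controls the trade-off the abstract describes: how strongly the agent matches the state expert's actions versus how much it optimizes its own return under partial observability.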

Cite this Paper


BibTeX
@InProceedings{pmlr-v205-nguyen23a,
  title     = {Leveraging Fully Observable Policies for Learning under Partial Observability},
  author    = {Nguyen, Hai Huu and Baisero, Andrea and Wang, Dian and Amato, Christopher and Platt, Robert},
  booktitle = {Proceedings of The 6th Conference on Robot Learning},
  pages     = {1673--1683},
  year      = {2023},
  editor    = {Liu, Karen and Kulic, Dana and Ichnowski, Jeff},
  volume    = {205},
  series    = {Proceedings of Machine Learning Research},
  month     = {14--18 Dec},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v205/nguyen23a/nguyen23a.pdf},
  url       = {https://proceedings.mlr.press/v205/nguyen23a.html},
  abstract  = {Reinforcement learning in partially observable domains is challenging due to the lack of observable state information. Thankfully, learning offline in a simulator with such state information is often possible. In particular, we propose a method for partially observable reinforcement learning that uses a fully observable policy (which we call a \emph{state expert}) during training to improve performance. Based on Soft Actor-Critic (SAC), our agent balances performing actions similar to the state expert and getting high returns under partial observability. Our approach can leverage the fully-observable policy for exploration and parts of the domain that are fully observable while still being able to learn under partial observability. On six robotics domains, our method outperforms pure imitation, pure reinforcement learning, the sequential or parallel combination of both types, and a recent state-of-the-art method in the same setting. A successful policy transfer to a physical robot in a manipulation task from pixels shows our approach’s practicality in learning interesting policies under partial observability.}
}
Endnote
%0 Conference Paper
%T Leveraging Fully Observable Policies for Learning under Partial Observability
%A Hai Huu Nguyen
%A Andrea Baisero
%A Dian Wang
%A Christopher Amato
%A Robert Platt
%B Proceedings of The 6th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Karen Liu
%E Dana Kulic
%E Jeff Ichnowski
%F pmlr-v205-nguyen23a
%I PMLR
%P 1673--1683
%U https://proceedings.mlr.press/v205/nguyen23a.html
%V 205
%X Reinforcement learning in partially observable domains is challenging due to the lack of observable state information. Thankfully, learning offline in a simulator with such state information is often possible. In particular, we propose a method for partially observable reinforcement learning that uses a fully observable policy (which we call a state expert) during training to improve performance. Based on Soft Actor-Critic (SAC), our agent balances performing actions similar to the state expert and getting high returns under partial observability. Our approach can leverage the fully observable policy for exploration and parts of the domain that are fully observable while still being able to learn under partial observability. On six robotics domains, our method outperforms pure imitation, pure reinforcement learning, the sequential or parallel combination of both types, and a recent state-of-the-art method in the same setting. A successful policy transfer to a physical robot in a manipulation task from pixels shows our approach's practicality in learning interesting policies under partial observability.
APA
Nguyen, H.H., Baisero, A., Wang, D., Amato, C. & Platt, R. (2023). Leveraging Fully Observable Policies for Learning under Partial Observability. Proceedings of The 6th Conference on Robot Learning, in Proceedings of Machine Learning Research 205:1673-1683. Available from https://proceedings.mlr.press/v205/nguyen23a.html.
