How to Explore with Belief: State Entropy Maximization in POMDPs

Riccardo Zamboni, Duilio Cirino, Marcello Restelli, Mirco Mutti
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:58140-58157, 2024.

Abstract

Recent works have studied state entropy maximization in reinforcement learning, in which the agent’s objective is to learn a policy inducing high entropy over state visitations (Hazan et al., 2019). They typically assume full observability of the state of the system, so that the entropy of the observations is maximized. In practice, the agent may only get partial observations, e.g., a robot perceiving the state of a physical space through proximity sensors and cameras. A significant mismatch between the entropy over observations and over the true states of the system can arise in those settings. In this paper, we address the problem of entropy maximization over the true states with a decision policy conditioned on partial observations only. The latter is a generalization of POMDPs, which is intractable in general. We develop a memory- and computation-efficient policy gradient method to address a first-order relaxation of the objective defined on belief states, providing formal characterizations of the approximation gaps, the optimization landscape, and the hallucination problem. This paper aims to generalize state entropy maximization to more realistic domains that meet the challenges of applications.
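As a rough sketch of the objective the abstract refers to (the notation below is ours and reflects a hedged reading of the paper, not its exact formulation): in the fully observable setting the agent maximizes the entropy of its state visitation distribution, while under partial observability the inaccessible true state distribution is replaced by a surrogate built from the belief the agent maintains over states given its observation-action history.

% Fully observable objective (Hazan et al., 2019): maximize the entropy of
% the state visitation distribution d^pi induced by the policy pi.
\max_{\pi} \; H\big(d^{\pi}\big) \;=\; \max_{\pi} \; -\sum_{s \in \mathcal{S}} d^{\pi}(s) \log d^{\pi}(s)

% Partially observable surrogate (our assumption of one plausible form of the
% paper's "first-order relaxation over belief states"): d^pi is replaced by a
% distribution built from the belief b_t, i.e. the posterior over the true
% state given the observation-action history.
\max_{\pi} \; H\Big(\mathbb{E}_{\pi}\big[\, b_t \,\big]\Big),
\qquad b_t(s) \;=\; \Pr\big(s_t = s \mid o_1, a_1, \ldots, o_t\big)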

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-zamboni24a,
  title     = {How to Explore with Belief: State Entropy Maximization in {POMDP}s},
  author    = {Zamboni, Riccardo and Cirino, Duilio and Restelli, Marcello and Mutti, Mirco},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {58140--58157},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/zamboni24a/zamboni24a.pdf},
  url       = {https://proceedings.mlr.press/v235/zamboni24a.html},
  abstract  = {Recent works have studied state entropy maximization in reinforcement learning, in which the agent’s objective is to learn a policy inducing high entropy over states visitation (Hazan et al., 2019). They typically assume full observability of the state of the system, so that the entropy of the observations is maximized. In practice, the agent may only get partial observations, e.g., a robot perceiving the state of a physical space through proximity sensors and cameras. A significant mismatch between the entropy over observations and true states of the system can arise in those settings. In this paper, we address the problem of entropy maximization over the true states with a decision policy conditioned on partial observations only. The latter is a generalization of POMDPs, which is intractable in general. We develop a memory and computationally efficient policy gradient method to address a first-order relaxation of the objective defined on belief states, providing various formal characterizations of approximation gaps, the optimization landscape, and the hallucination problem. This paper aims to generalize state entropy maximization to more realistic domains that meet the challenges of applications.}
}
Endnote
%0 Conference Paper
%T How to Explore with Belief: State Entropy Maximization in POMDPs
%A Riccardo Zamboni
%A Duilio Cirino
%A Marcello Restelli
%A Mirco Mutti
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-zamboni24a
%I PMLR
%P 58140--58157
%U https://proceedings.mlr.press/v235/zamboni24a.html
%V 235
%X Recent works have studied state entropy maximization in reinforcement learning, in which the agent’s objective is to learn a policy inducing high entropy over states visitation (Hazan et al., 2019). They typically assume full observability of the state of the system, so that the entropy of the observations is maximized. In practice, the agent may only get partial observations, e.g., a robot perceiving the state of a physical space through proximity sensors and cameras. A significant mismatch between the entropy over observations and true states of the system can arise in those settings. In this paper, we address the problem of entropy maximization over the true states with a decision policy conditioned on partial observations only. The latter is a generalization of POMDPs, which is intractable in general. We develop a memory and computationally efficient policy gradient method to address a first-order relaxation of the objective defined on belief states, providing various formal characterizations of approximation gaps, the optimization landscape, and the hallucination problem. This paper aims to generalize state entropy maximization to more realistic domains that meet the challenges of applications.
APA
Zamboni, R., Cirino, D., Restelli, M. & Mutti, M. (2024). How to Explore with Belief: State Entropy Maximization in POMDPs. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:58140-58157. Available from https://proceedings.mlr.press/v235/zamboni24a.html.