When Is Partially Observable Reinforcement Learning Not Scary?

Qinghua Liu, Alan Chung, Csaba Szepesvari, Chi Jin
Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:5175-5220, 2022.

Abstract

Partial observability is ubiquitous in applications of Reinforcement Learning (RL), in which agents learn to make a sequence of decisions despite lacking complete information about the latent states of the controlled system. Partially observable RL is notoriously difficult in theory—well-known complexity-theoretic results show that learning partially observable Markov decision processes (POMDPs) requires an exponential number of samples in the worst case. Yet, this does not rule out the existence of interesting subclasses of POMDPs that include a large set of partially observable applications encountered in practice while remaining tractable. In this paper we identify a rich family of tractable POMDPs, which we call weakly revealing POMDPs. This family rules out the pathological instances of POMDPs with non-informative observations. We prove that for weakly revealing POMDPs, a simple algorithm combining optimism and Maximum Likelihood Estimation (MLE) is sufficient to guarantee a polynomial sample complexity. To the best of our knowledge, this is the first provably sample-efficient result for learning in overcomplete POMDPs—where the number of latent states can be larger than the number of observations—in settings where exploration is necessary.
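
To make the key condition concrete, here is a brief sketch in symbols; the notation is illustrative, following the standard tabular-POMDP setup (a per-step emission matrix), and is not a verbatim restatement of the paper's definition. Writing \(\mathbb{O}_h \in \mathbb{R}^{O \times S}\) for the step-\(h\) emission matrix with entries \([\mathbb{O}_h]_{o,s} = \mathbb{P}(o_h = o \mid s_h = s)\), the single-step weakly revealing condition asks that its \(S\)-th largest singular value be bounded away from zero:

    \sigma_S(\mathbb{O}_h) \;\ge\; \alpha \;>\; 0 \qquad \text{for all } h = 1, \dots, H.

Intuitively, this excludes emission matrices with (nearly) linearly dependent columns, i.e. latent states that no amount of observation data can statistically distinguish; in the overcomplete case (\(S > O\)), the analogous bound is placed on a multi-step emission matrix built from length-\(m\) sequences of future observations and actions. The algorithm, roughly, maintains a confidence set of all models whose log-likelihood on the data gathered so far is close to the maximum, and acts according to the most optimistic model in that set (hence the combination of optimism and MLE).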

Cite this Paper


BibTeX
@InProceedings{pmlr-v178-liu22f,
  title     = {When Is Partially Observable Reinforcement Learning Not Scary?},
  author    = {Liu, Qinghua and Chung, Alan and Szepesvari, Csaba and Jin, Chi},
  booktitle = {Proceedings of Thirty Fifth Conference on Learning Theory},
  pages     = {5175--5220},
  year      = {2022},
  editor    = {Loh, Po-Ling and Raginsky, Maxim},
  volume    = {178},
  series    = {Proceedings of Machine Learning Research},
  month     = {02--05 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v178/liu22f/liu22f.pdf},
  url       = {https://proceedings.mlr.press/v178/liu22f.html},
  abstract  = {Partial observability is ubiquitous in applications of Reinforcement Learning (RL), in which agents learn to make a sequence of decisions despite lacking complete information about the latent states of the controlled system. Partially observable RL is notoriously difficult in theory—well-known complexity-theoretic results show that learning partially observable Markov decision processes (POMDPs) requires an exponential number of samples in the worst case. Yet, this does not rule out the existence of interesting subclasses of POMDPs that include a large set of partially observable applications encountered in practice while remaining tractable. In this paper we identify a rich family of tractable POMDPs, which we call weakly revealing POMDPs. This family rules out the pathological instances of POMDPs with non-informative observations. We prove that for weakly revealing POMDPs, a simple algorithm combining optimism and Maximum Likelihood Estimation (MLE) is sufficient to guarantee a polynomial sample complexity. To the best of our knowledge, this is the first provably sample-efficient result for learning in overcomplete POMDPs—where the number of latent states can be larger than the number of observations—in settings where exploration is necessary.}
}
Endnote
%0 Conference Paper
%T When Is Partially Observable Reinforcement Learning Not Scary?
%A Qinghua Liu
%A Alan Chung
%A Csaba Szepesvari
%A Chi Jin
%B Proceedings of Thirty Fifth Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2022
%E Po-Ling Loh
%E Maxim Raginsky
%F pmlr-v178-liu22f
%I PMLR
%P 5175--5220
%U https://proceedings.mlr.press/v178/liu22f.html
%V 178
%X Partial observability is ubiquitous in applications of Reinforcement Learning (RL), in which agents learn to make a sequence of decisions despite lacking complete information about the latent states of the controlled system. Partially observable RL is notoriously difficult in theory—well-known complexity-theoretic results show that learning partially observable Markov decision processes (POMDPs) requires an exponential number of samples in the worst case. Yet, this does not rule out the existence of interesting subclasses of POMDPs that include a large set of partially observable applications encountered in practice while remaining tractable. In this paper we identify a rich family of tractable POMDPs, which we call weakly revealing POMDPs. This family rules out the pathological instances of POMDPs with non-informative observations. We prove that for weakly revealing POMDPs, a simple algorithm combining optimism and Maximum Likelihood Estimation (MLE) is sufficient to guarantee a polynomial sample complexity. To the best of our knowledge, this is the first provably sample-efficient result for learning in overcomplete POMDPs—where the number of latent states can be larger than the number of observations—in settings where exploration is necessary.
APA
Liu, Q., Chung, A., Szepesvari, C. & Jin, C. (2022). When Is Partially Observable Reinforcement Learning Not Scary? Proceedings of Thirty Fifth Conference on Learning Theory, in Proceedings of Machine Learning Research 178:5175-5220. Available from https://proceedings.mlr.press/v178/liu22f.html.
