Near-Optimal Learning and Planning in Separated Latent MDPs

Fan Chen, Constantinos Daskalakis, Noah Golowich, Alexander Rakhlin
Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:995-1067, 2024.

Abstract

We study computational and statistical aspects of learning Latent Markov Decision Processes (LMDPs). In this model, the learner interacts with an MDP drawn at the beginning of each epoch from an unknown mixture of MDPs. To sidestep known impossibility results, we consider several notions of $\delta$-separation of the constituent MDPs. The main thrust of this paper is in establishing a nearly-sharp \textit{statistical threshold} for the horizon length necessary for efficient learning. On the computational side, we show that under a weaker assumption of separability under the optimal policy, there is a quasi-polynomial algorithm with time complexity scaling in terms of the statistical threshold. We further show a near-matching time complexity lower bound under the exponential time hypothesis.

Cite this Paper


BibTeX
@InProceedings{pmlr-v247-chen24c,
  title     = {Near-Optimal Learning and Planning in Separated Latent MDPs},
  author    = {Chen, Fan and Daskalakis, Constantinos and Golowich, Noah and Rakhlin, Alexander},
  booktitle = {Proceedings of Thirty Seventh Conference on Learning Theory},
  pages     = {995--1067},
  year      = {2024},
  editor    = {Agrawal, Shipra and Roth, Aaron},
  volume    = {247},
  series    = {Proceedings of Machine Learning Research},
  month     = {30 Jun--03 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v247/chen24c/chen24c.pdf},
  url       = {https://proceedings.mlr.press/v247/chen24c.html},
  abstract  = {We study computational and statistical aspects of learning Latent Markov Decision Processes (LMDPs). In this model, the learner interacts with an MDP drawn at the beginning of each epoch from an unknown mixture of MDPs. To sidestep known impossibility results, we consider several notions of $\delta$-separation of the constituent MDPs. The main thrust of this paper is in establishing a nearly-sharp \textit{statistical threshold} for the horizon length necessary for efficient learning. On the computational side, we show that under a weaker assumption of separability under the optimal policy, there is a quasi-polynomial algorithm with time complexity scaling in terms of the statistical threshold. We further show a near-matching time complexity lower bound under the exponential time hypothesis.}
}
Endnote
%0 Conference Paper
%T Near-Optimal Learning and Planning in Separated Latent MDPs
%A Fan Chen
%A Constantinos Daskalakis
%A Noah Golowich
%A Alexander Rakhlin
%B Proceedings of Thirty Seventh Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2024
%E Shipra Agrawal
%E Aaron Roth
%F pmlr-v247-chen24c
%I PMLR
%P 995--1067
%U https://proceedings.mlr.press/v247/chen24c.html
%V 247
%X We study computational and statistical aspects of learning Latent Markov Decision Processes (LMDPs). In this model, the learner interacts with an MDP drawn at the beginning of each epoch from an unknown mixture of MDPs. To sidestep known impossibility results, we consider several notions of $\delta$-separation of the constituent MDPs. The main thrust of this paper is in establishing a nearly-sharp \textit{statistical threshold} for the horizon length necessary for efficient learning. On the computational side, we show that under a weaker assumption of separability under the optimal policy, there is a quasi-polynomial algorithm with time complexity scaling in terms of the statistical threshold. We further show a near-matching time complexity lower bound under the exponential time hypothesis.
APA
Chen, F., Daskalakis, C., Golowich, N. & Rakhlin, A. (2024). Near-Optimal Learning and Planning in Separated Latent MDPs. Proceedings of Thirty Seventh Conference on Learning Theory, in Proceedings of Machine Learning Research 247:995-1067. Available from https://proceedings.mlr.press/v247/chen24c.html.

Related Material