Online Learning for Unknown Partially Observable MDPs

Mehdi Jafarnia Jahromi, Rahul Jain, Ashutosh Nayyar
Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:1712-1732, 2022.

Abstract

Solving Partially Observable Markov Decision Processes (POMDPs) is hard. Learning optimal controllers for POMDPs when the model is unknown is harder. Online learning of optimal controllers for unknown POMDPs, which requires efficient learning using regret-minimizing algorithms that effectively trade off exploration and exploitation, is even harder, and no solution currently exists. In this paper, we consider infinite-horizon average-cost POMDPs with an unknown transition model but a known observation model. We propose a natural posterior sampling-based reinforcement learning algorithm (PSRL-POMDP) and show that it achieves a regret bound of $O(\log T)$, where $T$ is the time horizon, when the parameter set is finite. In the general case (continuous parameter set), we show that the algorithm achieves $O(T^{2/3})$ regret under two technical assumptions. To the best of our knowledge, this is the first online RL algorithm for POMDPs with sub-linear regret.
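The core object in the abstract is the posterior-sampling loop: maintain a Bayesian posterior over the unknown transition parameter, periodically sample a model from it, plan as if the sample were the truth, and update both the posterior and the hidden-state belief (via the known observation model) from the resulting data. The sketch below is a toy illustration of that principle in the finite-parameter-set case; the two candidate kernels, the cost matrix, the myopic one-step-lookahead "planner", and the 50-step episode schedule are all illustrative assumptions, not the paper's construction.

# Toy illustration of the posterior-sampling principle in the
# finite-parameter-set case. All modeling choices below (candidate
# kernels, costs, myopic "planner", episode schedule) are
# illustrative assumptions, not the paper's actual algorithm.
import numpy as np

rng = np.random.default_rng(0)

# Known observation kernel: O[s, o] = P(observe o | hidden state s).
O = np.array([[0.85, 0.15],
              [0.15, 0.85]])

# Finite parameter set: two candidate transition kernels P[a, s, s'];
# the models differ only under action 0, so action 0 is informative.
thetas = [np.array([[[0.9, 0.1], [0.2, 0.8]],
                    [[0.5, 0.5], [0.5, 0.5]]]),
          np.array([[[0.3, 0.7], [0.7, 0.3]],
                    [[0.5, 0.5], [0.5, 0.5]]])]
true_theta = 0
cost = np.array([[0.0, 0.1],   # cost[s, a]: state 1 is costly,
                 [1.0, 1.1]])  # action 1 adds a small penalty

post = np.array([0.5, 0.5])                       # posterior over the models
beliefs = [np.array([0.5, 0.5]) for _ in thetas]  # per-model state beliefs
state = 0

for t in range(2000):
    if t % 50 == 0:                          # crude fixed episode schedule
        k = rng.choice(len(thetas), p=post)  # posterior sampling step
    # Myopic one-step lookahead under the sampled model: a stand-in for
    # actually solving the sampled POMDP.
    b = beliefs[k]
    q = [b @ cost[:, a] + (b @ thetas[k][a]) @ cost.min(axis=1)
         for a in range(2)]
    action = int(np.argmin(q))
    # The environment evolves under the *true* model.
    state = rng.choice(2, p=thetas[true_theta][action, state])
    obs = rng.choice(2, p=O[state])
    # Joint Bayes update of the model posterior and the state beliefs,
    # using the known observation kernel.
    for i, P in enumerate(thetas):
        pred = beliefs[i] @ P[action]        # predicted next-state belief
        lik = pred @ O[:, obs]               # P(obs | data so far, model i)
        post[i] *= lik
        beliefs[i] = pred * O[:, obs] / lik
    post /= post.sum()

print("posterior mass on the true model:", post[true_theta])

Run on this toy instance, the posterior mass on the true model drifts toward 1; this kind of posterior concentration is what the finite-parameter $O(\log T)$ analysis exploits.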

Cite this Paper


BibTeX
@InProceedings{pmlr-v151-jafarnia-jahromi22a,
  title     = {Online Learning for Unknown Partially Observable MDPs},
  author    = {Jafarnia Jahromi, Mehdi and Jain, Rahul and Nayyar, Ashutosh},
  booktitle = {Proceedings of The 25th International Conference on Artificial Intelligence and Statistics},
  pages     = {1712--1732},
  year      = {2022},
  editor    = {Camps-Valls, Gustau and Ruiz, Francisco J. R. and Valera, Isabel},
  volume    = {151},
  series    = {Proceedings of Machine Learning Research},
  month     = {28--30 Mar},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v151/jafarnia-jahromi22a/jafarnia-jahromi22a.pdf},
  url       = {https://proceedings.mlr.press/v151/jafarnia-jahromi22a.html},
  abstract  = {Solving Partially Observable Markov Decision Processes (POMDPs) is hard. Learning optimal controllers for POMDPs when the model is unknown is harder. Online learning of optimal controllers for unknown POMDPs, which requires efficient learning using regret-minimizing algorithms that effectively tradeoff exploration and exploitation, is even harder, and no solution exists currently. In this paper, we consider infinite-horizon average-cost POMDPs with unknown transition model, though a known observation model. We propose a natural posterior sampling-based reinforcement learning algorithm (PSRL-POMDP) and show that it achieves a regret bound of $O(\log T)$, where $T$ is the time horizon, when the parameter set is finite. In the general case (continuous parameter set), we show that the algorithm achieves $O(T^{2/3})$ regret under two technical assumptions. To the best of our knowledge, this is the first online RL algorithm for POMDPs and has sub-linear regret.}
}
Endnote
%0 Conference Paper
%T Online Learning for Unknown Partially Observable MDPs
%A Mehdi Jafarnia Jahromi
%A Rahul Jain
%A Ashutosh Nayyar
%B Proceedings of The 25th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2022
%E Gustau Camps-Valls
%E Francisco J. R. Ruiz
%E Isabel Valera
%F pmlr-v151-jafarnia-jahromi22a
%I PMLR
%P 1712--1732
%U https://proceedings.mlr.press/v151/jafarnia-jahromi22a.html
%V 151
%X Solving Partially Observable Markov Decision Processes (POMDPs) is hard. Learning optimal controllers for POMDPs when the model is unknown is harder. Online learning of optimal controllers for unknown POMDPs, which requires efficient learning using regret-minimizing algorithms that effectively tradeoff exploration and exploitation, is even harder, and no solution exists currently. In this paper, we consider infinite-horizon average-cost POMDPs with unknown transition model, though a known observation model. We propose a natural posterior sampling-based reinforcement learning algorithm (PSRL-POMDP) and show that it achieves a regret bound of $O(\log T)$, where $T$ is the time horizon, when the parameter set is finite. In the general case (continuous parameter set), we show that the algorithm achieves $O(T^{2/3})$ regret under two technical assumptions. To the best of our knowledge, this is the first online RL algorithm for POMDPs and has sub-linear regret.
APA
Jafarnia Jahromi, M., Jain, R. & Nayyar, A. (2022). Online Learning for Unknown Partially Observable MDPs. Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 151:1712-1732. Available from https://proceedings.mlr.press/v151/jafarnia-jahromi22a.html.