Efficient Reinforcement Learning in Block MDPs: A Model-free Representation Learning approach

Xuezhou Zhang, Yuda Song, Masatoshi Uehara, Mengdi Wang, Alekh Agarwal, Wen Sun
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:26517-26547, 2022.

Abstract

We present BRIEE, an algorithm for efficient reinforcement learning in Markov Decision Processes with block-structured dynamics (i.e., Block MDPs), where rich observations are generated from a set of unknown latent states. BRIEE interleaves latent state discovery, exploration, and exploitation, and can provably learn a near-optimal policy with sample complexity scaling polynomially in the number of latent states, actions, and the time horizon, with no dependence on the size of the potentially infinite observation space. Empirically, we show that BRIEE is more sample efficient than the state-of-the-art Block MDP algorithm HOMER and other empirical RL baselines on challenging rich-observation combination lock problems that require deep exploration.

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-zhang22aa,
  title     = {Efficient Reinforcement Learning in Block {MDP}s: A Model-free Representation Learning approach},
  author    = {Zhang, Xuezhou and Song, Yuda and Uehara, Masatoshi and Wang, Mengdi and Agarwal, Alekh and Sun, Wen},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {26517--26547},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/zhang22aa/zhang22aa.pdf},
  url       = {https://proceedings.mlr.press/v162/zhang22aa.html},
  abstract  = {We present BRIEE, an algorithm for efficient reinforcement learning in Markov Decision Processes with block-structured dynamics (i.e., Block MDPs), where rich observations are generated from a set of unknown latent states. BRIEE interleaves latent state discovery, exploration, and exploitation, and can provably learn a near-optimal policy with sample complexity scaling polynomially in the number of latent states, actions, and the time horizon, with no dependence on the size of the potentially infinite observation space. Empirically, we show that BRIEE is more sample efficient than the state-of-the-art Block MDP algorithm HOMER and other empirical RL baselines on challenging rich-observation combination lock problems that require deep exploration.}
}
Endnote
%0 Conference Paper
%T Efficient Reinforcement Learning in Block MDPs: A Model-free Representation Learning approach
%A Xuezhou Zhang
%A Yuda Song
%A Masatoshi Uehara
%A Mengdi Wang
%A Alekh Agarwal
%A Wen Sun
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-zhang22aa
%I PMLR
%P 26517--26547
%U https://proceedings.mlr.press/v162/zhang22aa.html
%V 162
%X We present BRIEE, an algorithm for efficient reinforcement learning in Markov Decision Processes with block-structured dynamics (i.e., Block MDPs), where rich observations are generated from a set of unknown latent states. BRIEE interleaves latent state discovery, exploration, and exploitation, and can provably learn a near-optimal policy with sample complexity scaling polynomially in the number of latent states, actions, and the time horizon, with no dependence on the size of the potentially infinite observation space. Empirically, we show that BRIEE is more sample efficient than the state-of-the-art Block MDP algorithm HOMER and other empirical RL baselines on challenging rich-observation combination lock problems that require deep exploration.
APA
Zhang, X., Song, Y., Uehara, M., Wang, M., Agarwal, A. &amp; Sun, W. (2022). Efficient Reinforcement Learning in Block MDPs: A Model-free Representation Learning approach. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:26517-26547. Available from https://proceedings.mlr.press/v162/zhang22aa.html.