PC-MLP: Model-based Reinforcement Learning with Policy Cover Guided Exploration

Yuda Song, Wen Sun
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:9801-9811, 2021.

Abstract

Model-based Reinforcement Learning (RL) is a popular learning paradigm due to its potential sample efficiency compared to model-free RL. However, existing empirical model-based RL approaches lack the ability to explore. This work studies a computationally and statistically efficient model-based algorithm for both Kernelized Nonlinear Regulators (KNR) and linear Markov Decision Processes (MDPs). For both models, our algorithm guarantees polynomial sample complexity and only uses access to a planning oracle. Experimentally, we first demonstrate the flexibility and efficacy of our algorithm on a set of exploration-challenging control tasks where existing empirical model-based RL approaches completely fail. We then show that our approach retains excellent performance even on common dense-reward control benchmarks that do not require heavy exploration.
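To make the idea in the abstract concrete, below is a minimal numpy sketch of a policy-cover style exploration loop: data is gathered by a growing set (the "cover") of policies, a dynamics model is fit by ridge regression on state-action features, an elliptical bonus rewards feature directions the cover has rarely visited, and a planning oracle optimizes task reward plus bonus under the learned model. The feature map, toy dynamics, reward, and random-shooting planner are illustrative assumptions made for this sketch, not the paper's exact components.

import numpy as np

rng = np.random.default_rng(0)
S_DIM, A_DIM, H, D, LAMBDA = 4, 2, 15, 12, 1.0

# Fixed random projection defining an assumed-known feature map (as in linear MDPs / KNR).
PROJ = rng.normal(size=(S_DIM + A_DIM, D)) / np.sqrt(S_DIM + A_DIM)

def phi(s, a):
    return np.tanh(np.concatenate([s, a]) @ PROJ)

def env_step(s, a):
    # Toy ground-truth dynamics, used only to generate data for this sketch.
    return 0.95 * s + 0.05 * np.tanh(s) + 0.1 * np.tanh(a).mean() + 0.01 * rng.normal(size=S_DIM)

def reward(s, a):
    # Simple task reward: get close to the all-ones target state.
    return float(np.exp(-np.linalg.norm(s - 1.0)))

def rollout(policy, n_episodes):
    # Run a policy in the real environment and record (s, a, s') transitions.
    data = []
    for _ in range(n_episodes):
        s = np.zeros(S_DIM)
        for h in range(H):
            a = policy(s, h)
            s_next = env_step(s, a)
            data.append((s, a, s_next))
            s = s_next
    return data

def fit_model(data):
    # Ridge regression of the next state on phi(s, a): s' ~ W^T phi(s, a).
    Phi = np.stack([phi(s, a) for s, a, _ in data])
    Y = np.stack([s_next for _, _, s_next in data])
    return np.linalg.solve(Phi.T @ Phi + LAMBDA * np.eye(D), Phi.T @ Y)

def cover_covariance(data):
    # Aggregate feature covariance of the data collected by the policy cover.
    Sigma = LAMBDA * np.eye(D)
    for s, a, _ in data:
        f = phi(s, a)
        Sigma += np.outer(f, f)
    return Sigma

def bonus(s, a, Sigma_inv):
    # Elliptical exploration bonus: large for feature directions the cover rarely visits.
    f = phi(s, a)
    return float(np.sqrt(f @ Sigma_inv @ f))

def plan(W, Sigma_inv, beta=1.0, n_candidates=300):
    # Stand-in "planning oracle": random shooting over open-loop action sequences
    # inside the learned model, maximizing task reward plus exploration bonus.
    best_seq, best_val = None, -np.inf
    for _ in range(n_candidates):
        seq = rng.uniform(-1.0, 1.0, size=(H, A_DIM))
        s, val = np.zeros(S_DIM), 0.0
        for h in range(H):
            val += reward(s, seq[h]) + beta * bonus(s, seq[h], Sigma_inv)
            s = W.T @ phi(s, seq[h])          # simulate with the learned model
        if val > best_val:
            best_seq, best_val = seq, val
    return lambda s, h, seq=best_seq: seq[h]  # open-loop policy from the best sequence

# Main loop: roll out the cover, refit the model, replan with the bonus, grow the cover.
cover = [lambda s, h: rng.uniform(-1.0, 1.0, size=A_DIM)]   # start from a random policy
dataset = []
for _ in range(10):
    for pi in cover:                          # gather data from the policy cover
        dataset += rollout(pi, n_episodes=2)
    W = fit_model(dataset)
    Sigma_inv = np.linalg.inv(cover_covariance(dataset))
    cover.append(plan(W, Sigma_inv))          # the new exploratory policy joins the cover

In the paper the bonus and the planning oracle come with guarantees; here they are only approximated by a fixed bonus weight and random shooting, so the sketch conveys the structure of the loop rather than the algorithm's formal properties.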

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-song21b,
  title     = {PC-MLP: Model-based Reinforcement Learning with Policy Cover Guided Exploration},
  author    = {Song, Yuda and Sun, Wen},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {9801--9811},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/song21b/song21b.pdf},
  url       = {https://proceedings.mlr.press/v139/song21b.html},
  abstract  = {Model-based Reinforcement Learning (RL) is a popular learning paradigm due to its potential sample efficiency compared to model-free RL. However, existing empirical model-based RL approaches lack the ability to explore. This work studies a computationally and statistically efficient model-based algorithm for both Kernelized Nonlinear Regulators (KNR) and linear Markov Decision Processes (MDPs). For both models, our algorithm guarantees polynomial sample complexity and only uses access to a planning oracle. Experimentally, we first demonstrate the flexibility and the efficacy of our algorithm on a set of exploration challenging control tasks where existing empirical model-based RL approaches completely fail. We then show that our approach retains excellent performance even in common dense reward control benchmarks that do not require heavy exploration.}
}
Endnote
%0 Conference Paper
%T PC-MLP: Model-based Reinforcement Learning with Policy Cover Guided Exploration
%A Yuda Song
%A Wen Sun
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-song21b
%I PMLR
%P 9801--9811
%U https://proceedings.mlr.press/v139/song21b.html
%V 139
%X Model-based Reinforcement Learning (RL) is a popular learning paradigm due to its potential sample efficiency compared to model-free RL. However, existing empirical model-based RL approaches lack the ability to explore. This work studies a computationally and statistically efficient model-based algorithm for both Kernelized Nonlinear Regulators (KNR) and linear Markov Decision Processes (MDPs). For both models, our algorithm guarantees polynomial sample complexity and only uses access to a planning oracle. Experimentally, we first demonstrate the flexibility and the efficacy of our algorithm on a set of exploration challenging control tasks where existing empirical model-based RL approaches completely fail. We then show that our approach retains excellent performance even in common dense reward control benchmarks that do not require heavy exploration.
APA
Song, Y. & Sun, W. (2021). PC-MLP: Model-based Reinforcement Learning with Policy Cover Guided Exploration. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:9801-9811. Available from https://proceedings.mlr.press/v139/song21b.html.
