Model-based Bayesian reinforcement learning in large structured domains

Stéphane Ross, Joelle Pineau
Proceedings of the 24th Conference on Uncertainty in Artificial Intelligence, PMLR R6:476-483, 2008.

Abstract

Model-based Bayesian reinforcement learning has generated significant interest in the AI community as it provides an elegant solution to the optimal exploration-exploitation tradeoff in classical reinforcement learning. Unfortunately, the applicability of this type of approach has been limited to small domains due to the high complexity of reasoning about the joint posterior over model parameters. In this paper, we consider the use of factored representations combined with online planning techniques, to improve scalability of these methods. The main contribution of this paper is a Bayesian framework for learning the structure and parameters of a dynamical system, while also simultaneously planning a (near-)optimal sequence of actions.

Cite this Paper
BibTeX
@InProceedings{pmlr-vR6-ross08a,
  title     = {Model-based Bayesian reinforcement learning in large structured domains},
  author    = {Ross, St\'{e}phane and Pineau, Joelle},
  booktitle = {Proceedings of the 24th Conference on Uncertainty in Artificial Intelligence},
  pages     = {476--483},
  year      = {2008},
  editor    = {McAllester, David A. and Myllymäki, Petri},
  volume    = {R6},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--12 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/r6/main/assets/ross08a/ross08a.pdf},
  url       = {https://proceedings.mlr.press/r6/ross08a.html},
  abstract  = {Model-based Bayesian reinforcement learning has generated significant interest in the AI community as it provides an elegant solution to the optimal exploration-exploitation tradeoff in classical reinforcement learning. Unfortunately, the applicability of this type of approach has been limited to small domains due to the high complexity of reasoning about the joint posterior over model parameters. In this paper, we consider the use of factored representations combined with online planning techniques, to improve scalability of these methods. The main contribution of this paper is a Bayesian framework for learning the structure and parameters of a dynamical system, while also simultaneously planning a (near-)optimal sequence of actions.},
  note      = {Reissued by PMLR on 09 October 2024.}
}
Endnote
%0 Conference Paper
%T Model-based Bayesian reinforcement learning in large structured domains
%A Stéphane Ross
%A Joelle Pineau
%B Proceedings of the 24th Conference on Uncertainty in Artificial Intelligence
%C Proceedings of Machine Learning Research
%D 2008
%E David A. McAllester
%E Petri Myllymäki
%F pmlr-vR6-ross08a
%I PMLR
%P 476--483
%U https://proceedings.mlr.press/r6/ross08a.html
%V R6
%X Model-based Bayesian reinforcement learning has generated significant interest in the AI community as it provides an elegant solution to the optimal exploration-exploitation tradeoff in classical reinforcement learning. Unfortunately, the applicability of this type of approach has been limited to small domains due to the high complexity of reasoning about the joint posterior over model parameters. In this paper, we consider the use of factored representations combined with online planning techniques, to improve scalability of these methods. The main contribution of this paper is a Bayesian framework for learning the structure and parameters of a dynamical system, while also simultaneously planning a (near-)optimal sequence of actions.
%Z Reissued by PMLR on 09 October 2024.
APA
Ross, S. & Pineau, J. (2008). Model-based Bayesian reinforcement learning in large structured domains. Proceedings of the 24th Conference on Uncertainty in Artificial Intelligence, in Proceedings of Machine Learning Research R6:476-483. Available from https://proceedings.mlr.press/r6/ross08a.html. Reissued by PMLR on 09 October 2024.