ABC Reinforcement Learning

Christos Dimitrakakis, Nikolaos Tziortziotis
Proceedings of the 30th International Conference on Machine Learning, PMLR 28(3):684-692, 2013.

Abstract

We introduce a simple, general framework for likelihood-free Bayesian reinforcement learning, through Approximate Bayesian Computation (ABC). The advantage is that we only require a prior distribution on a class of simulators. This is useful when a probabilistic model of the underlying process is too complex to formulate, but where detailed simulation models are available. ABC-RL allows the use of any Bayesian reinforcement learning technique in this case. It can be seen as an extension of simulation methods to both planning and inference. We experimentally demonstrate the potential of this approach in a comparison with LSPI. Finally, we introduce a theorem showing that ABC is sound.
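
To make the core idea concrete, here is a minimal sketch of rejection-ABC over a class of simulators, in the spirit of the approach the abstract describes. The function names, the user-supplied callables, and the Euclidean acceptance test are illustrative assumptions, not the authors' implementation: parameters drawn from the prior are kept only when the summary statistics of the history they simulate fall within a tolerance of the statistics of the observed history.

    import numpy as np

    # Hypothetical sketch of rejection-ABC over simulator parameters.
    # prior_sample, simulate, and statistic are assumed to be supplied
    # by the user; they are not part of the paper's code.
    def abc_posterior_samples(prior_sample, simulate, statistic,
                              observed_history, policy, epsilon, n_samples):
        """Draw approximate posterior samples of simulator parameters.

        prior_sample()             -- draws parameters theta from the prior
        simulate(theta, policy, T) -- rolls out a length-T history under policy
        statistic(history)         -- maps a history to a summary vector
        """
        s_obs = np.asarray(statistic(observed_history))
        accepted = []
        while len(accepted) < n_samples:
            theta = prior_sample()
            history = simulate(theta, policy, len(observed_history))
            # Accept theta when the simulated statistics are epsilon-close
            # to the observed ones; accepted draws approximate the posterior.
            if np.linalg.norm(np.asarray(statistic(history)) - s_obs) <= epsilon:
                accepted.append(theta)
        return accepted

Accepted parameters can then be handed to any Bayesian reinforcement learning technique, for example by drawing one accepted simulator and acting according to a policy optimised within it, in the manner of Thompson sampling.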

Cite this Paper


BibTeX
@InProceedings{pmlr-v28-dimitrakakis13,
  title     = {ABC Reinforcement Learning},
  author    = {Dimitrakakis, Christos and Tziortziotis, Nikolaos},
  booktitle = {Proceedings of the 30th International Conference on Machine Learning},
  pages     = {684--692},
  year      = {2013},
  editor    = {Dasgupta, Sanjoy and McAllester, David},
  volume    = {28},
  number    = {3},
  series    = {Proceedings of Machine Learning Research},
  address   = {Atlanta, Georgia, USA},
  month     = {17--19 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v28/dimitrakakis13.pdf},
  url       = {https://proceedings.mlr.press/v28/dimitrakakis13.html},
  abstract  = {We introduce a simple, general framework for likelihood-free Bayesian reinforcement learning, through Approximate Bayesian Computation (ABC). The advantage is that we only require a prior distribution on a class of simulators. This is useful when a probabilistic model of the underlying process is too complex to formulate, but where detailed simulation models are available. ABC-RL allows the use of any Bayesian reinforcement learning technique in this case. It can be seen as an extension of simulation methods to both planning and inference. We experimentally demonstrate the potential of this approach in a comparison with LSPI. Finally, we introduce a theorem showing that ABC is sound.}
}
Endnote
%0 Conference Paper
%T ABC Reinforcement Learning
%A Christos Dimitrakakis
%A Nikolaos Tziortziotis
%B Proceedings of the 30th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2013
%E Sanjoy Dasgupta
%E David McAllester
%F pmlr-v28-dimitrakakis13
%I PMLR
%P 684--692
%U https://proceedings.mlr.press/v28/dimitrakakis13.html
%V 28
%N 3
%X We introduce a simple, general framework for likelihood-free Bayesian reinforcement learning, through Approximate Bayesian Computation (ABC). The advantage is that we only require a prior distribution on a class of simulators. This is useful when a probabilistic model of the underlying process is too complex to formulate, but where detailed simulation models are available. ABC-RL allows the use of any Bayesian reinforcement learning technique in this case. It can be seen as an extension of simulation methods to both planning and inference. We experimentally demonstrate the potential of this approach in a comparison with LSPI. Finally, we introduce a theorem showing that ABC is sound.
RIS
TY  - CPAPER
TI  - ABC Reinforcement Learning
AU  - Christos Dimitrakakis
AU  - Nikolaos Tziortziotis
BT  - Proceedings of the 30th International Conference on Machine Learning
DA  - 2013/05/26
ED  - Sanjoy Dasgupta
ED  - David McAllester
ID  - pmlr-v28-dimitrakakis13
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 28
IS  - 3
SP  - 684
EP  - 692
L1  - http://proceedings.mlr.press/v28/dimitrakakis13.pdf
UR  - https://proceedings.mlr.press/v28/dimitrakakis13.html
AB  - We introduce a simple, general framework for likelihood-free Bayesian reinforcement learning, through Approximate Bayesian Computation (ABC). The advantage is that we only require a prior distribution on a class of simulators. This is useful when a probabilistic model of the underlying process is too complex to formulate, but where detailed simulation models are available. ABC-RL allows the use of any Bayesian reinforcement learning technique in this case. It can be seen as an extension of simulation methods to both planning and inference. We experimentally demonstrate the potential of this approach in a comparison with LSPI. Finally, we introduce a theorem showing that ABC is sound.
ER  -
APA
Dimitrakakis, C. & Tziortziotis, N. (2013). ABC Reinforcement Learning. Proceedings of the 30th International Conference on Machine Learning, in Proceedings of Machine Learning Research 28(3):684-692. Available from https://proceedings.mlr.press/v28/dimitrakakis13.html.