Abstraction Selection in Model-based Reinforcement Learning

Nan Jiang, Alex Kulesza, Satinder Singh
Proceedings of the 32nd International Conference on Machine Learning, PMLR 37:179-188, 2015.

Abstract

State abstractions are often used to reduce the complexity of model-based reinforcement learning when only limited quantities of data are available. However, choosing the appropriate level of abstraction is an important problem in practice. Existing approaches have theoretical guarantees only under strong assumptions on the domain or asymptotically large amounts of data, but in this paper we propose a simple algorithm based on statistical hypothesis testing that comes with a finite-sample guarantee under assumptions on candidate abstractions. Our algorithm trades off the low approximation error of finer abstractions against the low estimation error of coarser abstractions, resulting in a loss bound that depends only on the quality of the best available abstraction and is polynomial in planning horizon.
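The abstract describes the key tradeoff only at a high level. As a loose, purely illustrative sketch of that idea (not the paper's actual algorithm; every name and threshold below is hypothetical, and only reward means are compared, with rewards assumed to lie in [0, 1]), the following Python code accepts a coarser candidate abstraction only while a Hoeffding-style test cannot statistically distinguish its per-cluster reward means from those of the finest abstraction:

import numpy as np

def hoeffding_radius(n, delta):
    # Hoeffding confidence radius for the mean of n samples in [0, 1].
    return np.sqrt(np.log(2.0 / delta) / (2.0 * max(n, 1)))

def cluster_stats(states, rewards, phi):
    # Empirical mean reward and sample count per abstract state under phi.
    stats = {}
    for s, r in zip(states, rewards):
        z = phi(s)
        total, count = stats.get(z, (0.0, 0))
        stats[z] = (total + r, count + 1)
    return {z: (total / count, count) for z, (total, count) in stats.items()}

def select_abstraction(states, rewards, abstractions, delta=0.05):
    # Return the index of the coarsest acceptable abstraction.
    # abstractions: functions state -> abstract state, ordered finest first,
    # where each coarser abstraction merges the finer ones' clusters.
    fine_phi = abstractions[0]
    fine = cluster_stats(states, rewards, fine_phi)
    # Keep one representative raw state per fine cluster so we can locate
    # that cluster inside a coarser clustering.
    reps = {}
    for s in states:
        reps.setdefault(fine_phi(s), s)
    chosen = 0
    for i in range(1, len(abstractions)):
        coarse = cluster_stats(states, rewards, abstractions[i])
        ok = True
        for z, (m_f, n_f) in fine.items():
            m_c, n_c = coarse[abstractions[i](reps[z])]
            # Reject the coarser abstraction if any merged cluster's mean
            # falls outside the combined confidence interval.
            if abs(m_f - m_c) > hoeffding_radius(n_f, delta) + hoeffding_radius(n_c, delta):
                ok = False
                break
        if not ok:
            break
        chosen = i
    return chosen

# Toy usage: raw states 0..3, a fine abstraction (identity) and a coarse
# one merging {0,1} and {2,3}; the coarse one should be selected (prints 1).
rng = np.random.default_rng(0)
states = rng.integers(0, 4, size=2000)
rewards = np.where(states < 2, 0.3, 0.7) + rng.uniform(-0.05, 0.05, size=2000)
print(select_abstraction(states, rewards, [lambda s: s, lambda s: s // 2]))

A real abstraction-selection procedure would also compare transition statistics and account for the planning horizon appearing in the paper's loss bound; the sketch ignores both.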

Cite this Paper


BibTeX
@InProceedings{pmlr-v37-jiang15,
  title     = {Abstraction Selection in Model-based Reinforcement Learning},
  author    = {Jiang, Nan and Kulesza, Alex and Singh, Satinder},
  booktitle = {Proceedings of the 32nd International Conference on Machine Learning},
  pages     = {179--188},
  year      = {2015},
  editor    = {Bach, Francis and Blei, David},
  volume    = {37},
  series    = {Proceedings of Machine Learning Research},
  address   = {Lille, France},
  month     = {07--09 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v37/jiang15.pdf},
  url       = {https://proceedings.mlr.press/v37/jiang15.html}
}
APA
Jiang, N., Kulesza, A., & Singh, S. (2015). Abstraction Selection in Model-based Reinforcement Learning. In Proceedings of the 32nd International Conference on Machine Learning, Proceedings of Machine Learning Research 37:179-188. Available from https://proceedings.mlr.press/v37/jiang15.html.