Competing with an Infinite Set of Models in Reinforcement Learning

Phuong Nguyen, Odalric-Ambrym Maillard, Daniil Ryabko, Ronald Ortner
Proceedings of the Sixteenth International Conference on Artificial Intelligence and Statistics, PMLR 31:463-471, 2013.

Abstract

We consider a reinforcement learning setting where the learner must also find a suitable state-representation function from a given set of models. This has to be done while interacting with the environment in an online fashion (no resets), and the goal is to achieve small regret with respect to any Markov model in the set. For this setting, the BLB algorithm has recently been proposed; it achieves regret of order T^{2/3}, provided that the given set of models is finite. Our first contribution is to extend this result to a countably infinite set of models. Moreover, the BLB regret bound suffers from an additive term that can be exponential in the diameter of the MDP involved, since the diameter has to be guessed. The algorithm we propose avoids guessing the diameter, thus improving the regret bound.
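For context, a minimal LaTeX sketch of the regret notion: the abstract does not define it, so the following assumes the standard undiscounted-regret convention from this line of work (e.g., the UCRL2 analysis), where \rho^* denotes the optimal average reward of the Markov model and r_t the reward received at step t; both symbols are assumptions, not notation from this page.

% Hedged sketch, assuming the UCRL2-style undiscounted regret:
% \rho^* = optimal average reward, r_t = reward received at step t.
\[
  \Delta(T) \;=\; T\,\rho^{*} \;-\; \sum_{t=1}^{T} r_t ,
  \qquad
  \text{BLB (finite set of models):}\quad \Delta(T) = \tilde{O}\bigl(T^{2/3}\bigr).
\]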

Cite this Paper


BibTeX
@InProceedings{pmlr-v31-nguyen13a,
  title     = {Competing with an Infinite Set of Models in Reinforcement Learning},
  author    = {Nguyen, Phuong and Maillard, Odalric-Ambrym and Ryabko, Daniil and Ortner, Ronald},
  booktitle = {Proceedings of the Sixteenth International Conference on Artificial Intelligence and Statistics},
  pages     = {463--471},
  year      = {2013},
  editor    = {Carvalho, Carlos M. and Ravikumar, Pradeep},
  volume    = {31},
  series    = {Proceedings of Machine Learning Research},
  address   = {Scottsdale, Arizona, USA},
  month     = {29 Apr--01 May},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v31/nguyen13a.pdf},
  url       = {https://proceedings.mlr.press/v31/nguyen13a.html},
  abstract  = {We consider a reinforcement learning setting where the learner must also find a suitable state-representation function from a given set of models. This has to be done while interacting with the environment in an online fashion (no resets), and the goal is to achieve small regret with respect to any Markov model in the set. For this setting, the BLB algorithm has recently been proposed; it achieves regret of order T^{2/3}, provided that the given set of models is finite. Our first contribution is to extend this result to a countably infinite set of models. Moreover, the BLB regret bound suffers from an additive term that can be exponential in the diameter of the MDP involved, since the diameter has to be guessed. The algorithm we propose avoids guessing the diameter, thus improving the regret bound.}
}
Endnote
%0 Conference Paper
%T Competing with an Infinite Set of Models in Reinforcement Learning
%A Phuong Nguyen
%A Odalric-Ambrym Maillard
%A Daniil Ryabko
%A Ronald Ortner
%B Proceedings of the Sixteenth International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2013
%E Carlos M. Carvalho
%E Pradeep Ravikumar
%F pmlr-v31-nguyen13a
%I PMLR
%P 463--471
%U https://proceedings.mlr.press/v31/nguyen13a.html
%V 31
%X We consider a reinforcement learning setting where the learner must also find a suitable state-representation function from a given set of models. This has to be done while interacting with the environment in an online fashion (no resets), and the goal is to achieve small regret with respect to any Markov model in the set. For this setting, the BLB algorithm has recently been proposed; it achieves regret of order T^{2/3}, provided that the given set of models is finite. Our first contribution is to extend this result to a countably infinite set of models. Moreover, the BLB regret bound suffers from an additive term that can be exponential in the diameter of the MDP involved, since the diameter has to be guessed. The algorithm we propose avoids guessing the diameter, thus improving the regret bound.
RIS
TY  - CPAPER
TI  - Competing with an Infinite Set of Models in Reinforcement Learning
AU  - Phuong Nguyen
AU  - Odalric-Ambrym Maillard
AU  - Daniil Ryabko
AU  - Ronald Ortner
BT  - Proceedings of the Sixteenth International Conference on Artificial Intelligence and Statistics
DA  - 2013/04/29
ED  - Carlos M. Carvalho
ED  - Pradeep Ravikumar
ID  - pmlr-v31-nguyen13a
PB  - PMLR
DP  - Proceedings of Machine Learning Research
VL  - 31
SP  - 463
EP  - 471
L1  - http://proceedings.mlr.press/v31/nguyen13a.pdf
UR  - https://proceedings.mlr.press/v31/nguyen13a.html
AB  - We consider a reinforcement learning setting where the learner must also find a suitable state-representation function from a given set of models. This has to be done while interacting with the environment in an online fashion (no resets), and the goal is to achieve small regret with respect to any Markov model in the set. For this setting, the BLB algorithm has recently been proposed; it achieves regret of order T^{2/3}, provided that the given set of models is finite. Our first contribution is to extend this result to a countably infinite set of models. Moreover, the BLB regret bound suffers from an additive term that can be exponential in the diameter of the MDP involved, since the diameter has to be guessed. The algorithm we propose avoids guessing the diameter, thus improving the regret bound.
ER  -
APA
Nguyen, P., Maillard, O.-A., Ryabko, D. & Ortner, R. (2013). Competing with an Infinite Set of Models in Reinforcement Learning. Proceedings of the Sixteenth International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 31:463-471. Available from https://proceedings.mlr.press/v31/nguyen13a.html.