Batch Value-function Approximation with Only Realizability

Tengyang Xie, Nan Jiang
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:11404-11413, 2021.

Abstract

We make progress in a long-standing problem of batch reinforcement learning (RL): learning Q* from an exploratory and polynomial-sized dataset, using a realizable and otherwise arbitrary function class. In fact, all existing algorithms demand function-approximation assumptions stronger than realizability, and the mounting negative evidence has led to a conjecture that sample-efficient learning is impossible in this setting (Chen & Jiang, 2019). Our algorithm, BVFT, breaks the hardness conjecture (albeit under a stronger notion of exploratory data) via a tournament procedure that reduces the learning problem to pairwise comparison, and solves the latter with the help of a state-action-space partition constructed from the compared functions. We also discuss how BVFT can be applied to model selection among other extensions and open problems.
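To make the tournament idea concrete, below is a minimal Python sketch of a BVFT-style selection procedure as described in the abstract. It is an illustration under simplifying assumptions (a finite action space, a small list of candidate Q-functions given as callables, and a single fixed discretization resolution), not the authors' implementation; all names such as bvft_select, dataset, and candidates are hypothetical.

```python
import numpy as np
from collections import defaultdict

def bvft_select(dataset, candidates, num_actions, gamma=0.99, resolution=0.1):
    """Sketch of a BVFT-style tournament (illustrative, not the paper's code).

    dataset: list of (s, a, r, s_next) transitions from the batch data
    candidates: list of candidate Q-functions, each a callable q(s, a) -> float
    Returns the index of the selected candidate.
    """
    K, n = len(candidates), len(dataset)

    # Pre-compute each candidate's Q-values and greedy Bellman targets on the data.
    q_vals, targets = [], []
    for f in candidates:
        q_vals.append(np.array([f(s, a) for (s, a, r, sn) in dataset]))
        targets.append(np.array([
            r + gamma * max(f(sn, b) for b in range(num_actions))
            for (s, a, r, sn) in dataset]))

    def pairwise_loss(i, j):
        # Partition the data by discretizing the value pair (f_i, f_j)
        # at the chosen resolution: the state-action-space partition
        # constructed from the two compared functions.
        cells = defaultdict(list)
        for t in range(n):
            key = (int(np.floor(q_vals[i][t] / resolution)),
                   int(np.floor(q_vals[j][t] / resolution)))
            cells[key].append(t)
        # Squared distance between f_i and the projection of its Bellman
        # backup onto functions that are constant on each cell.
        loss = 0.0
        for idx in cells.values():
            idx = np.array(idx)
            proj = targets[i][idx].mean()  # cell-wise average backup
            loss += np.sum((q_vals[i][idx] - proj) ** 2)
        return loss / n

    # Tournament: keep the candidate whose worst pairwise loss is smallest.
    scores = [max(pairwise_loss(i, j) for j in range(K)) for i in range(K)]
    return int(np.argmin(scores))
```

Roughly, each pairwise comparison estimates how far a candidate is from the projection of its own Bellman backup onto piecewise-constant functions over the partition induced by the compared pair, and the tournament returns the candidate whose worst such loss is smallest.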

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-xie21d,
  title     = {Batch Value-function Approximation with Only Realizability},
  author    = {Xie, Tengyang and Jiang, Nan},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {11404--11413},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/xie21d/xie21d.pdf},
  url       = {https://proceedings.mlr.press/v139/xie21d.html}
}
APA
Xie, T., & Jiang, N. (2021). Batch Value-function Approximation with Only Realizability. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:11404-11413. Available from https://proceedings.mlr.press/v139/xie21d.html.