Q* Approximation Schemes for Batch Reinforcement Learning: A Theoretical Comparison

Tengyang Xie, Nan Jiang
Proceedings of the 36th Conference on Uncertainty in Artificial Intelligence (UAI), PMLR 124:550-559, 2020.

Abstract

We prove performance guarantees of two algorithms for approximating Q* in batch reinforcement learning. Compared to classical iterative methods such as Fitted Q-Iteration—whose performance loss incurs quadratic dependence on horizon—these methods estimate (some forms of) the Bellman error and enjoy linear-in-horizon error propagation, a property established for the first time for algorithms that rely solely on batch data and output stationary policies. One of the algorithms uses a novel and explicit importance-weighting correction to overcome the infamous "double sampling" difficulty in Bellman error estimation, and does not use any squared losses. Our analyses reveal its distinct characteristics and potential advantages compared to classical algorithms.
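The classical baseline the abstract contrasts against, Fitted Q-Iteration, can be illustrated with a minimal sketch. The toy MDP, the uniform behavior policy, and the tabular function class below are illustrative assumptions for exposition only; they are not the paper's algorithms or setting.

```python
import numpy as np

# A minimal sketch of classical Fitted Q-Iteration (FQI) on a toy tabular MDP.
# The MDP, dataset, and function class (a plain table) are hypothetical.

rng = np.random.default_rng(0)
n_states, n_actions, gamma = 4, 2, 0.9

# Hypothetical MDP: P[s, a] is a next-state distribution, R[s, a] in [0, 1).
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
R = rng.random((n_states, n_actions))

# Batch data: transitions (s, a, r, s') from a uniform behavior policy.
def sample_batch(n):
    s = rng.integers(n_states, size=n)
    a = rng.integers(n_actions, size=n)
    s_next = np.array([rng.choice(n_states, p=P[si, ai])
                       for si, ai in zip(s, a)])
    return s, a, R[s, a], s_next

s, a, r, s_next = sample_batch(5000)

# FQI: repeatedly regress Q(s, a) onto the empirical Bellman backup
# r + gamma * max_a' Q(s', a'). With a tabular class the least-squares
# regression reduces to a per-(s, a) average of the targets.
Q = np.zeros((n_states, n_actions))
for _ in range(100):
    target = r + gamma * Q[s_next].max(axis=1)
    Q_new = np.zeros_like(Q)
    for si in range(n_states):
        for ai in range(n_actions):
            mask = (s == si) & (a == ai)
            if mask.any():
                Q_new[si, ai] = target[mask].mean()
    Q = Q_new

greedy_policy = Q.argmax(axis=1)  # the stationary policy FQI outputs
print(greedy_policy)
```

Each iteration applies an approximate Bellman backup; errors made at each of the effective-horizon-many iterations compound, which is the source of the quadratic horizon dependence the paper improves on.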

Cite this Paper


BibTeX
@InProceedings{pmlr-v124-xie20a,
  title     = {Q* Approximation Schemes for Batch Reinforcement Learning: A Theoretical Comparison},
  author    = {Xie, Tengyang and Jiang, Nan},
  booktitle = {Proceedings of the 36th Conference on Uncertainty in Artificial Intelligence (UAI)},
  pages     = {550--559},
  year      = {2020},
  editor    = {Jonas Peters and David Sontag},
  volume    = {124},
  series    = {Proceedings of Machine Learning Research},
  month     = {03--06 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v124/xie20a/xie20a.pdf},
  url       = {http://proceedings.mlr.press/v124/xie20a.html},
  abstract  = {We prove performance guarantees of two algorithms for approximating Q* in batch reinforcement learning. Compared to classical iterative methods such as Fitted Q-Iteration—whose performance loss incurs quadratic dependence on horizon—these methods estimate (some forms of) the Bellman error and enjoy linear-in-horizon error propagation, a property established for the first time for algorithms that rely solely on batch data and output stationary policies. One of the algorithms uses a novel and explicit importance-weighting correction to overcome the infamous "double sampling" difficulty in Bellman error estimation, and does not use any squared losses. Our analyses reveal its distinct characteristics and potential advantages compared to classical algorithms.}
}
Endnote
%0 Conference Paper
%T Q* Approximation Schemes for Batch Reinforcement Learning: A Theoretical Comparison
%A Tengyang Xie
%A Nan Jiang
%B Proceedings of the 36th Conference on Uncertainty in Artificial Intelligence (UAI)
%C Proceedings of Machine Learning Research
%D 2020
%E Jonas Peters
%E David Sontag
%F pmlr-v124-xie20a
%I PMLR
%P 550--559
%U http://proceedings.mlr.press/v124/xie20a.html
%V 124
%X We prove performance guarantees of two algorithms for approximating Q* in batch reinforcement learning. Compared to classical iterative methods such as Fitted Q-Iteration—whose performance loss incurs quadratic dependence on horizon—these methods estimate (some forms of) the Bellman error and enjoy linear-in-horizon error propagation, a property established for the first time for algorithms that rely solely on batch data and output stationary policies. One of the algorithms uses a novel and explicit importance-weighting correction to overcome the infamous "double sampling" difficulty in Bellman error estimation, and does not use any squared losses. Our analyses reveal its distinct characteristics and potential advantages compared to classical algorithms.
APA
Xie, T. & Jiang, N. (2020). Q* Approximation Schemes for Batch Reinforcement Learning: A Theoretical Comparison. Proceedings of the 36th Conference on Uncertainty in Artificial Intelligence (UAI), in Proceedings of Machine Learning Research 124:550-559. Available from http://proceedings.mlr.press/v124/xie20a.html.