Information-Theoretic Considerations in Batch Reinforcement Learning

Jinglin Chen, Nan Jiang
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:1042-1051, 2019.

Abstract

Value-function approximation methods that operate in batch mode have foundational importance to reinforcement learning (RL). Finite-sample guarantees for these methods often crucially rely on two types of assumptions: (1) mild distribution shift, and (2) representation conditions that are stronger than realizability. However, the necessity (“why do we need them?”) and the naturalness (“when do they hold?”) of such assumptions have largely gone unexamined in the literature. In this paper, we revisit these assumptions and provide theoretical results that address these questions, taking steps towards a deeper understanding of value-function approximation.
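To make these assumptions concrete, the sketch below implements Fitted Q-Iteration (FQI), a canonical batch value-function approximation method of the kind the paper studies. This is a minimal illustration, not the authors' code: the transition-dataset format, the linear feature map phi, and the least-squares fit are assumptions made here for exposition. Comments mark where the two assumptions enter: the max over actions induces distribution shift away from the data distribution, and the regression step is where a representation condition stronger than realizability (completeness, i.e., closedness of the function class under the Bellman update) is needed.

```python
# Minimal sketch of Fitted Q-Iteration (FQI) with a linear function class.
# Names here (dataset format, phi, hyperparameters) are illustrative
# assumptions, not part of the paper.
import numpy as np

def fitted_q_iteration(dataset, phi, num_actions, gamma=0.99, num_iters=50):
    """dataset: list of (s, a, r, s_next) transitions sampled from a fixed
    data distribution mu (terminal-state handling omitted for brevity).
    phi(s, a): feature vector defining the linear class {(s, a) -> phi(s, a) @ w}.
    """
    dim = phi(*dataset[0][:2]).shape[0]
    w = np.zeros(dim)  # parameters of the current Q-value estimate
    for _ in range(num_iters):
        features, targets = [], []
        for (s, a, r, s_next) in dataset:
            # Bellman backup target. The max over actions is where
            # distribution shift enters: the greedy policy can visit
            # state-actions that mu covers poorly, which is what a
            # "mild distribution shift" (bounded concentrability)
            # assumption rules out.
            q_next = max(phi(s_next, b) @ w for b in range(num_actions))
            features.append(phi(s, a))
            targets.append(r + gamma * q_next)
        # Least-squares regression of the backed-up targets onto the class.
        # If the class is merely realizable (contains Q*) but not closed
        # under this backup ("completeness"), the projection can introduce
        # error that no amount of data removes.
        w, *_ = np.linalg.lstsq(np.array(features), np.array(targets), rcond=None)
    return w
```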

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-chen19e,
  title     = {Information-Theoretic Considerations in Batch Reinforcement Learning},
  author    = {Chen, Jinglin and Jiang, Nan},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {1042--1051},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/chen19e/chen19e.pdf},
  url       = {https://proceedings.mlr.press/v97/chen19e.html}
}
