Offline Reinforcement Learning with Realizability and Single-policy Concentrability

Wenhao Zhan, Baihe Huang, Audrey Huang, Nan Jiang, Jason Lee
Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:2730-2775, 2022.

Abstract

Sample-efficiency guarantees for offline reinforcement learning (RL) often rely on strong assumptions on both the function classes (e.g., Bellman-completeness) and the data coverage (e.g., all-policy concentrability). Despite recent efforts to relax these assumptions, existing works are only able to relax one of the two factors, leaving the strong assumption on the other intact. As an important open problem, can we achieve sample-efficient offline RL with weak assumptions on both factors? In this paper we answer the question in the affirmative. We analyze a simple algorithm based on the primal-dual formulation of MDPs, where the dual variables (discounted occupancies) are modeled using a density-ratio function against the offline data. With proper regularization, the algorithm enjoys polynomial sample complexity under only realizability and single-policy concentrability. We also provide alternative analyses based on different assumptions to shed light on the nature of primal-dual algorithms for offline RL.
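For readers new to this line of work, the following is a minimal, paper-agnostic sketch of the primal-dual view the abstract refers to: the linear-programming formulation of a discounted MDP, its Lagrangian, and the re-parameterization of the dual (occupancy) variable as a density ratio against the offline data distribution. The symbols d, w, v, d^D, and \mu_0 are chosen here purely for illustration; the paper's actual regularized objective and function-class assumptions differ in their details.

% Generic primal-dual sketch for a discounted MDP (illustrative; not the paper's exact objective).
\begin{align*}
&\text{Occupancy LP:}\quad
  \max_{d \ge 0}\ \sum_{s,a} d(s,a)\, r(s,a)
  \quad\text{s.t.}\quad
  \sum_{a} d(s,a) = (1-\gamma)\,\mu_0(s)
    + \gamma \sum_{s',a'} P(s \mid s',a')\, d(s',a') \quad \forall s, \\
&\text{Lagrangian with dual variable } v:\mathcal{S}\to\mathbb{R}:\quad
  L(d, v) = (1-\gamma)\,\mathbb{E}_{s \sim \mu_0}\!\left[v(s)\right]
    + \sum_{s,a} d(s,a)\Bigl( r(s,a)
    + \gamma\,\mathbb{E}_{s' \sim P(\cdot\mid s,a)}\!\left[v(s')\right] - v(s) \Bigr), \\
&\text{Density-ratio form:}\quad
  d(s,a) = w(s,a)\, d^D(s,a)
  \;\Longrightarrow\;
  L(w, v) = (1-\gamma)\,\mathbb{E}_{\mu_0}\!\left[v\right]
    + \mathbb{E}_{(s,a,r,s') \sim d^D}\!\Bigl[ w(s,a)\bigl(r + \gamma\, v(s') - v(s)\bigr) \Bigr].
\end{align*}

The last expression involves only expectations over offline transitions, so both the density ratio w and the value-like variable v can be modeled with realizable function classes and estimated from data; per the abstract, the paper shows that a suitably regularized saddle-point objective of this general form achieves polynomial sample complexity under single-policy concentrability.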

Cite this Paper


BibTeX
@InProceedings{pmlr-v178-zhan22a,
  title     = {Offline Reinforcement Learning with Realizability and Single-policy Concentrability},
  author    = {Zhan, Wenhao and Huang, Baihe and Huang, Audrey and Jiang, Nan and Lee, Jason},
  booktitle = {Proceedings of Thirty Fifth Conference on Learning Theory},
  pages     = {2730--2775},
  year      = {2022},
  editor    = {Loh, Po-Ling and Raginsky, Maxim},
  volume    = {178},
  series    = {Proceedings of Machine Learning Research},
  month     = {02--05 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v178/zhan22a/zhan22a.pdf},
  url       = {https://proceedings.mlr.press/v178/zhan22a.html}
}
APA
Zhan, W., Huang, B., Huang, A., Jiang, N. & Lee, J. (2022). Offline Reinforcement Learning with Realizability and Single-policy Concentrability. Proceedings of Thirty Fifth Conference on Learning Theory, in Proceedings of Machine Learning Research 178:2730-2775. Available from https://proceedings.mlr.press/v178/zhan22a.html.
