Provable Zero-Shot Generalization in Offline Reinforcement Learning

Zhiyong Wang, Chen Yang, John C.S. Lui, Dongruo Zhou
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:65122-65143, 2025.

Abstract

In this work, we study offline reinforcement learning (RL) with the zero-shot generalization (ZSG) property, where the agent has access to an offline dataset of experiences collected from different environments, and the goal is to train a policy on the training environments that performs well on test environments without further interaction. Existing work has shown that classical offline RL fails to generalize to new, unseen environments. We propose pessimistic empirical risk minimization (PERM) and pessimistic proximal policy optimization (PPPO), which leverage pessimistic policy evaluation to guide policy learning and enhance generalization. We show that both PERM and PPPO are capable of finding a near-optimal policy with ZSG. Our result serves as a first step toward understanding the foundations of the generalization phenomenon in offline reinforcement learning.
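
The abstract's central mechanism, pessimistic policy evaluation, can be illustrated with a toy sketch. The idea is to penalize value estimates by an uncertainty bonus so that policies relying on poorly covered state-action pairs in the offline data are scored conservatively. The sketch below is NOT the paper's PERM or PPPO algorithms: it is a simplified one-step (bandit-style) lower-confidence-bound estimate, and the dataset format, function names, and the beta / sqrt(count) penalty are illustrative assumptions.

    import numpy as np

    def pessimistic_value_estimate(dataset, policy, num_states, num_actions, beta=1.0):
        """Lower-confidence-bound (pessimistic) value of `policy` from offline
        transitions (s, a, r): subtract an uncertainty bonus that shrinks with
        the visitation count of each (state, action) pair."""
        counts = np.zeros((num_states, num_actions))
        reward_sum = np.zeros((num_states, num_actions))
        for s, a, r in dataset:
            counts[s, a] += 1
            reward_sum[s, a] += r
        mean_r = reward_sum / np.maximum(counts, 1)
        bonus = beta / np.sqrt(np.maximum(counts, 1))       # pessimism penalty
        lcb = np.where(counts > 0, mean_r - bonus, -beta)   # unseen pairs get the worst case
        # Average the pessimistic reward of the action the policy takes in each state.
        return float(np.mean([lcb[s, policy[s]] for s in range(num_states)]))

    def select_pessimistic_policy(dataset, candidate_policies, num_states, num_actions):
        """Pick the candidate policy whose pessimistic estimate is largest."""
        return max(candidate_policies,
                   key=lambda pi: pessimistic_value_estimate(dataset, pi, num_states, num_actions))

    # Toy usage (hypothetical data): 2 states, 2 actions; (s=1, a=1) is barely covered,
    # so the pessimistic estimate discounts policies that rely on it.
    data = [(0, 0, 1.0), (0, 0, 0.9), (0, 1, 0.2), (1, 0, 0.5), (1, 0, 0.6), (1, 1, 1.0)]
    policies = [[0, 0], [0, 1]]
    best = select_pessimistic_policy(data, policies, num_states=2, num_actions=2)

In the paper's setting the same conservatism is applied per training environment and combined across environments to obtain zero-shot guarantees; the sketch only conveys why pessimism protects against poorly covered regions of the offline data.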

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-wang25dx,
  title     = {Provable Zero-Shot Generalization in Offline Reinforcement Learning},
  author    = {Wang, Zhiyong and Yang, Chen and Lui, John C.S. and Zhou, Dongruo},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {65122--65143},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/wang25dx/wang25dx.pdf},
  url       = {https://proceedings.mlr.press/v267/wang25dx.html},
  abstract  = {In this work, we study offline reinforcement learning (RL) with zero-shot generalization property (ZSG), where the agent has access to an offline dataset including experiences from different environments, and the goal of the agent is to train a policy over the training environments which performs well on test environments without further interaction. Existing work showed that classical offline RL fails to generalize to new, unseen environments. We propose pessimistic empirical risk minimization (PERM) and pessimistic proximal policy optimization (PPPO), which leverage pessimistic policy evaluation to guide policy learning and enhance generalization. We show that both PERM and PPPO are capable of finding a near-optimal policy with ZSG. Our result serves as a first step in understanding the foundation of the generalization phenomenon in offline reinforcement learning.}
}
Endnote
%0 Conference Paper
%T Provable Zero-Shot Generalization in Offline Reinforcement Learning
%A Zhiyong Wang
%A Chen Yang
%A John C.S. Lui
%A Dongruo Zhou
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-wang25dx
%I PMLR
%P 65122--65143
%U https://proceedings.mlr.press/v267/wang25dx.html
%V 267
%X In this work, we study offline reinforcement learning (RL) with zero-shot generalization property (ZSG), where the agent has access to an offline dataset including experiences from different environments, and the goal of the agent is to train a policy over the training environments which performs well on test environments without further interaction. Existing work showed that classical offline RL fails to generalize to new, unseen environments. We propose pessimistic empirical risk minimization (PERM) and pessimistic proximal policy optimization (PPPO), which leverage pessimistic policy evaluation to guide policy learning and enhance generalization. We show that both PERM and PPPO are capable of finding a near-optimal policy with ZSG. Our result serves as a first step in understanding the foundation of the generalization phenomenon in offline reinforcement learning.
APA
Wang, Z., Yang, C., Lui, J.C.S., & Zhou, D. (2025). Provable Zero-Shot Generalization in Offline Reinforcement Learning. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:65122-65143. Available from https://proceedings.mlr.press/v267/wang25dx.html.
