Bridging Distributionally Robust Learning and Offline RL: An Approach to Mitigate Distribution Shift and Partial Data Coverage

Kishan Panaganti, Zaiyan Xu, Dileep Kalathil, Mohammad Ghavamzadeh
Proceedings of the 7th Annual Learning for Dynamics & Control Conference, PMLR 283:619-634, 2025.

Abstract

The goal of an offline reinforcement learning (RL) algorithm is to learn the optimal policy using offline data, without access to the environment for online exploration. One of the main challenges in offline RL is distribution shift, which refers to the difference between the state-action visitation distributions of the data-generating policy and the learning policy. Many recent works have used the idea of pessimism for developing offline RL algorithms and characterizing their sample complexity under a relatively weak assumption of single policy concentrability. Different from the offline RL literature, the area of distributionally robust learning (DRL) offers a principled framework that uses a minimax formulation to tackle model mismatch between training and testing environments. In this work, we aim to bridge these two areas by showing that the DRL approach can tackle the distribution shift problem in offline RL. In particular, we propose two offline RL algorithms using the DRL framework, for the tabular and linear function approximation settings, and characterize their sample complexity under the single policy concentrability assumption. We also demonstrate the performance of our algorithm through simulation experiments and by comparing it with other state-of-the-art tabular offline RL algorithms.
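For orientation only, and not as the paper's exact objective, the minimax formulation that DRL brings to this setting can be sketched as optimizing the worst-case return over an uncertainty set of transition models centered at a nominal model estimated from the offline data; the symbols $\hat{P}$, $\rho$, and $\mathcal{P}(\hat{P},\rho)$ below are illustrative notation, not taken from the paper:

\[
\hat{\pi} \;\in\; \arg\max_{\pi}\; \min_{P \,\in\, \mathcal{P}(\hat{P},\,\rho)} \; \mathbb{E}^{\pi}_{P}\!\left[ \sum_{t=0}^{\infty} \gamma^{t}\, r(s_t, a_t) \right],
\]

where $\hat{P}$ is the nominal transition model estimated from the offline dataset, $\mathcal{P}(\hat{P},\rho)$ is the set of transition models within radius $\rho$ of $\hat{P}$ under a chosen divergence, $\gamma$ is the discount factor, and $r$ is the reward. In this generic sketch, the inner minimization plays a role analogous to pessimism, guarding against models that partially covered data cannot rule out.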

Cite this Paper


BibTeX
@InProceedings{pmlr-v283-panaganti25a,
  title     = {Bridging Distributionally Robust Learning and Offline RL: An Approach to Mitigate Distribution Shift and Partial Data Coverage},
  author    = {Panaganti, Kishan and Xu, Zaiyan and Kalathil, Dileep and Ghavamzadeh, Mohammad},
  booktitle = {Proceedings of the 7th Annual Learning for Dynamics \& Control Conference},
  pages     = {619--634},
  year      = {2025},
  editor    = {Ozay, Necmiye and Balzano, Laura and Panagou, Dimitra and Abate, Alessandro},
  volume    = {283},
  series    = {Proceedings of Machine Learning Research},
  month     = {04--06 Jun},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v283/main/assets/panaganti25a/panaganti25a.pdf},
  url       = {https://proceedings.mlr.press/v283/panaganti25a.html},
  abstract  = {The goal of an offline reinforcement learning (RL) algorithm is to learn the optimal policy using offline data, without access to the environment for online exploration. One of the main challenges in offline RL is the distribution shift which refers to the difference between the state-action visitation distribution of the data generating policy and the learning policy. Many recent works have used the idea of pessimism for developing offline RL algorithms and characterizing their sample complexity under a relatively weak assumption of single policy concentrability. Different from the offline RL literature, the area of distributionally robust learning (DRL) offers a principled framework that uses a minimax formulation to tackle model mismatch between training and testing environments. In this work, we aim to bridge these two areas by showing that the DRL approach can tackle the distributional shift problem in offline RL. In particular, we propose two offline RL algorithms using the DRL framework, for the tabular and linear function approximation settings, and characterize their sample complexity under the single policy concentrability assumption. We also demonstrate the performance of our algorithm through simulation experiments and by comparing it with other state-of-the-art tabular offline RL algorithms.}
}
Endnote
%0 Conference Paper
%T Bridging Distributionally Robust Learning and Offline RL: An Approach to Mitigate Distribution Shift and Partial Data Coverage
%A Kishan Panaganti
%A Zaiyan Xu
%A Dileep Kalathil
%A Mohammad Ghavamzadeh
%B Proceedings of the 7th Annual Learning for Dynamics & Control Conference
%C Proceedings of Machine Learning Research
%D 2025
%E Necmiye Ozay
%E Laura Balzano
%E Dimitra Panagou
%E Alessandro Abate
%F pmlr-v283-panaganti25a
%I PMLR
%P 619--634
%U https://proceedings.mlr.press/v283/panaganti25a.html
%V 283
%X The goal of an offline reinforcement learning (RL) algorithm is to learn the optimal policy using offline data, without access to the environment for online exploration. One of the main challenges in offline RL is the distribution shift which refers to the difference between the state-action visitation distribution of the data generating policy and the learning policy. Many recent works have used the idea of pessimism for developing offline RL algorithms and characterizing their sample complexity under a relatively weak assumption of single policy concentrability. Different from the offline RL literature, the area of distributionally robust learning (DRL) offers a principled framework that uses a minimax formulation to tackle model mismatch between training and testing environments. In this work, we aim to bridge these two areas by showing that the DRL approach can tackle the distributional shift problem in offline RL. In particular, we propose two offline RL algorithms using the DRL framework, for the tabular and linear function approximation settings, and characterize their sample complexity under the single policy concentrability assumption. We also demonstrate the performance of our algorithm through simulation experiments and by comparing it with other state-of-the-art tabular offline RL algorithms.
APA
Panaganti, K., Xu, Z., Kalathil, D. & Ghavamzadeh, M. (2025). Bridging Distributionally Robust Learning and Offline RL: An Approach to Mitigate Distribution Shift and Partial Data Coverage. Proceedings of the 7th Annual Learning for Dynamics & Control Conference, in Proceedings of Machine Learning Research 283:619-634. Available from https://proceedings.mlr.press/v283/panaganti25a.html.