Offline RL Policies Should Be Trained to be Adaptive

Dibya Ghosh, Anurag Ajay, Pulkit Agrawal, Sergey Levine
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:7513-7530, 2022.

Abstract

Offline RL algorithms must account for the fact that the dataset they are provided may leave many facets of the environment unknown. The most common way to approach this challenge is to employ pessimistic or conservative methods, which avoid behaviors that are too dissimilar from those in the training dataset. However, relying exclusively on conservatism has drawbacks: performance is sensitive to the exact degree of conservatism, and conservative objectives can recover highly suboptimal policies. In this work, we propose that offline RL methods should instead be adaptive in the presence of uncertainty. We show that acting optimally in offline RL in a Bayesian sense involves solving an implicit POMDP. As a result, optimal policies for offline RL must be adaptive, depending not just on the current state but rather all the transitions seen so far during evaluation. We present a model-free algorithm for approximating this optimal adaptive policy, and demonstrate the efficacy of learning such adaptive policies in offline RL benchmarks.

Cite this Paper
BibTeX
@InProceedings{pmlr-v162-ghosh22a,
  title     = {Offline {RL} Policies Should Be Trained to be Adaptive},
  author    = {Ghosh, Dibya and Ajay, Anurag and Agrawal, Pulkit and Levine, Sergey},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {7513--7530},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/ghosh22a/ghosh22a.pdf},
  url       = {https://proceedings.mlr.press/v162/ghosh22a.html},
  abstract  = {Offline RL algorithms must account for the fact that the dataset they are provided may leave many facets of the environment unknown. The most common way to approach this challenge is to employ pessimistic or conservative methods, which avoid behaviors that are too dissimilar from those in the training dataset. However, relying exclusively on conservatism has drawbacks: performance is sensitive to the exact degree of conservatism, and conservative objectives can recover highly suboptimal policies. In this work, we propose that offline RL methods should instead be adaptive in the presence of uncertainty. We show that acting optimally in offline RL in a Bayesian sense involves solving an implicit POMDP. As a result, optimal policies for offline RL must be adaptive, depending not just on the current state but rather all the transitions seen so far during evaluation. We present a model-free algorithm for approximating this optimal adaptive policy, and demonstrate the efficacy of learning such adaptive policies in offline RL benchmarks.}
}
Endnote
%0 Conference Paper
%T Offline RL Policies Should Be Trained to be Adaptive
%A Dibya Ghosh
%A Anurag Ajay
%A Pulkit Agrawal
%A Sergey Levine
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-ghosh22a
%I PMLR
%P 7513--7530
%U https://proceedings.mlr.press/v162/ghosh22a.html
%V 162
%X Offline RL algorithms must account for the fact that the dataset they are provided may leave many facets of the environment unknown. The most common way to approach this challenge is to employ pessimistic or conservative methods, which avoid behaviors that are too dissimilar from those in the training dataset. However, relying exclusively on conservatism has drawbacks: performance is sensitive to the exact degree of conservatism, and conservative objectives can recover highly suboptimal policies. In this work, we propose that offline RL methods should instead be adaptive in the presence of uncertainty. We show that acting optimally in offline RL in a Bayesian sense involves solving an implicit POMDP. As a result, optimal policies for offline RL must be adaptive, depending not just on the current state but rather all the transitions seen so far during evaluation. We present a model-free algorithm for approximating this optimal adaptive policy, and demonstrate the efficacy of learning such adaptive policies in offline RL benchmarks.
APA
Ghosh, D., Ajay, A., Agrawal, P. & Levine, S. (2022). Offline RL Policies Should Be Trained to be Adaptive. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:7513-7530. Available from https://proceedings.mlr.press/v162/ghosh22a.html.