What is Essential for Unseen Goal Generalization of Offline Goal-conditioned RL?

Rui Yang, Lin Yong, Xiaoteng Ma, Hao Hu, Chongjie Zhang, Tong Zhang
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:39543-39571, 2023.

Abstract

Offline goal-conditioned RL (GCRL) offers a way to train general-purpose agents from fully offline datasets. In addition to being conservative within the dataset, the generalization ability to achieve unseen goals is another fundamental challenge for offline GCRL. However, to the best of our knowledge, this problem has not yet been well studied. In this paper, we study out-of-distribution (OOD) generalization of offline GCRL both theoretically and empirically to identify the important factors. In a number of experiments, we observe that weighted imitation learning enjoys better generalization than pessimism-based offline RL methods. Based on this insight, we derive a theory for OOD generalization, which characterizes several important design choices. We then propose a new offline GCRL method, Generalizable Offline goAl-condiTioned RL (GOAT), by combining the findings from our theoretical and empirical studies. On a new benchmark containing 9 independent and identically distributed (IID) tasks and 17 OOD tasks, GOAT outperforms current state-of-the-art methods by a large margin.
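To make the weighted imitation learning idea from the abstract concrete, below is a minimal sketch of a goal-conditioned, advantage-weighted behavior-cloning objective in the spirit of methods such as WGCSL. All class and function names here are illustrative assumptions for exposition, not the paper's actual GOAT implementation.

```python
import torch
import torch.nn as nn

class GoalConditionedPolicy(nn.Module):
    """Minimal Gaussian policy pi(a | s, g); a sketch, not the paper's model."""
    def __init__(self, state_dim, goal_dim, action_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + goal_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim),
        )
        self.log_std = nn.Parameter(torch.zeros(action_dim))

    def log_prob(self, s, g, a):
        # Log-likelihood of dataset actions under the goal-conditioned policy.
        mean = self.net(torch.cat([s, g], dim=-1))
        dist = torch.distributions.Normal(mean, self.log_std.exp())
        return dist.log_prob(a).sum(-1)

def weighted_imitation_loss(policy, s, a, g, adv, clip=10.0):
    """Advantage-weighted goal-conditioned behavior cloning (illustrative)."""
    # Exponential advantage weights, clipped for numerical stability.
    w = torch.clamp(torch.exp(adv), max=clip)
    # Weighted behavior cloning: imitate dataset actions on (possibly
    # relabeled) goals, upweighting transitions whose estimated advantage
    # A(s, a, g) toward the goal is high.
    return -(w * policy.log_prob(s, g, a)).mean()
```

Note the design choice this sketch highlights: the policy only ever imitates actions that appear in the dataset, with the exponential weight steering imitation toward high-advantage transitions, so it never queries values at out-of-distribution actions. This is one intuition consistent with the paper's observation that weighted imitation learning generalizes better than pessimism-based offline RL methods.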

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-yang23q,
  title     = {What is Essential for Unseen Goal Generalization of Offline Goal-conditioned {RL}?},
  author    = {Yang, Rui and Yong, Lin and Ma, Xiaoteng and Hu, Hao and Zhang, Chongjie and Zhang, Tong},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {39543--39571},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/yang23q/yang23q.pdf},
  url       = {https://proceedings.mlr.press/v202/yang23q.html}
}
Endnote
%0 Conference Paper
%T What is Essential for Unseen Goal Generalization of Offline Goal-conditioned RL?
%A Rui Yang
%A Lin Yong
%A Xiaoteng Ma
%A Hao Hu
%A Chongjie Zhang
%A Tong Zhang
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-yang23q
%I PMLR
%P 39543--39571
%U https://proceedings.mlr.press/v202/yang23q.html
%V 202
APA
Yang, R., Yong, L., Ma, X., Hu, H., Zhang, C. & Zhang, T. (2023). What is Essential for Unseen Goal Generalization of Offline Goal-conditioned RL?. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:39543-39571. Available from https://proceedings.mlr.press/v202/yang23q.html.
