Contrastive Value Learning: Implicit Models for Simple Offline RL

Bogdan Mazoure, Benjamin Eysenbach, Ofir Nachum, Jonathan Tompson
Proceedings of The 7th Conference on Robot Learning, PMLR 229:1257-1267, 2023.

Abstract

Model-based reinforcement learning (RL) methods are appealing in the offline setting because they allow an agent to reason about the consequences of actions without interacting with the environment. While conventional model-based methods learn a 1-step model, predicting the immediate next state, these methods must be plugged into larger planning or RL systems to yield a policy. Can we model the environment dynamics in a different way, such that the learned model directly indicates the value of each action? In this paper, we propose Contrastive Value Learning (CVL), which learns an implicit, multi-step dynamics model. This model can be learned without access to reward functions, but nonetheless can be used to directly estimate the value of each action, without requiring any TD learning. Because this model represents the multi-step transitions implicitly, it avoids having to predict high-dimensional observations and thus scales to high-dimensional tasks. Our experiments demonstrate that CVL outperforms prior offline RL methods on complex robotics benchmarks.
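To make the core idea concrete, the minimal sketch below shows one way an implicit, contrastive critic over (state, action, future state) triples could be trained with an InfoNCE objective and then read out as a reward-weighted value estimate without TD learning. This is an illustration under simplifying assumptions (flat observations, in-batch negatives, a softmax readout over candidate future states), not the authors' implementation; the class and function names (ContrastiveCritic, infonce_loss, value_estimate) are hypothetical.

```python
# Hedged sketch in the spirit of Contrastive Value Learning (not the paper's code).
# Assumes: states and actions are flat vectors; the "positive" future state for row i
# is a state sampled from the same trajectory at a discounted (geometric) offset,
# and the other rows of the batch serve as negatives.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ContrastiveCritic(nn.Module):
    def __init__(self, state_dim, action_dim, embed_dim=64):
        super().__init__()
        # phi embeds the (state, action) pair; psi embeds a candidate future state.
        self.phi = nn.Sequential(nn.Linear(state_dim + action_dim, 256), nn.ReLU(),
                                 nn.Linear(256, embed_dim))
        self.psi = nn.Sequential(nn.Linear(state_dim, 256), nn.ReLU(),
                                 nn.Linear(256, embed_dim))

    def forward(self, state, action, future_state):
        sa = self.phi(torch.cat([state, action], dim=-1))  # (B, d)
        sf = self.psi(future_state)                        # (N, d)
        return sa @ sf.T                                   # (B, N) pairwise logits


def infonce_loss(critic, state, action, future_state):
    """InfoNCE objective: each (state, action) row should score its own
    discounted-future state higher than the futures of the other rows."""
    logits = critic(state, action, future_state)
    labels = torch.arange(logits.shape[0], device=logits.device)
    return F.cross_entropy(logits, labels)


def value_estimate(critic, state, action, candidate_states, candidate_rewards):
    """Reward-weighted readout: treat softmax over the critic's logits as an
    approximate discounted future-state distribution and average rewards under it.
    This is one simplified way an implicit model can yield values without TD
    learning; the estimator used in the paper may differ."""
    logits = critic(state, action, candidate_states)       # (B, N)
    weights = F.softmax(logits, dim=-1)
    return weights @ candidate_rewards                     # (B,)
```

In this sketch the critic is trained purely from transitions (no rewards), and rewards enter only at readout time through the candidate states; the paper's exact objective, negative sampling scheme, and value estimator are described in the full text.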

Cite this Paper


BibTeX
@InProceedings{pmlr-v229-mazoure23a,
  title     = {Contrastive Value Learning: Implicit Models for Simple Offline RL},
  author    = {Mazoure, Bogdan and Eysenbach, Benjamin and Nachum, Ofir and Tompson, Jonathan},
  booktitle = {Proceedings of The 7th Conference on Robot Learning},
  pages     = {1257--1267},
  year      = {2023},
  editor    = {Tan, Jie and Toussaint, Marc and Darvish, Kourosh},
  volume    = {229},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--09 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v229/mazoure23a/mazoure23a.pdf},
  url       = {https://proceedings.mlr.press/v229/mazoure23a.html},
  abstract  = {Model-based reinforcement learning (RL) methods are appealing in the offline setting because they allow an agent to reason about the consequences of actions without interacting with the environment. While conventional model-based methods learn a 1-step model, predicting the immediate next state, these methods must be plugged into larger planning or RL systems to yield a policy. Can we model the environment dynamics in a different way, such that the learned model directly indicates the value of each action? In this paper, we propose Contrastive Value Learning (CVL), which learns an implicit, multi-step dynamics model. This model can be learned without access to reward functions, but nonetheless can be used to directly estimate the value of each action, without requiring any TD learning. Because this model represents the multi-step transitions implicitly, it avoids having to predict high-dimensional observations and thus scales to high-dimensional tasks. Our experiments demonstrate that CVL outperforms prior offline RL methods on complex robotics benchmarks.}
}
Endnote
%0 Conference Paper
%T Contrastive Value Learning: Implicit Models for Simple Offline RL
%A Bogdan Mazoure
%A Benjamin Eysenbach
%A Ofir Nachum
%A Jonathan Tompson
%B Proceedings of The 7th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Jie Tan
%E Marc Toussaint
%E Kourosh Darvish
%F pmlr-v229-mazoure23a
%I PMLR
%P 1257--1267
%U https://proceedings.mlr.press/v229/mazoure23a.html
%V 229
%X Model-based reinforcement learning (RL) methods are appealing in the offline setting because they allow an agent to reason about the consequences of actions without interacting with the environment. While conventional model-based methods learn a 1-step model, predicting the immediate next state, these methods must be plugged into larger planning or RL systems to yield a policy. Can we model the environment dynamics in a different way, such that the learned model directly indicates the value of each action? In this paper, we propose Contrastive Value Learning (CVL), which learns an implicit, multi-step dynamics model. This model can be learned without access to reward functions, but nonetheless can be used to directly estimate the value of each action, without requiring any TD learning. Because this model represents the multi-step transitions implicitly, it avoids having to predict high-dimensional observations and thus scales to high-dimensional tasks. Our experiments demonstrate that CVL outperforms prior offline RL methods on complex robotics benchmarks.
APA
Mazoure, B., Eysenbach, B., Nachum, O. & Tompson, J. (2023). Contrastive Value Learning: Implicit Models for Simple Offline RL. Proceedings of The 7th Conference on Robot Learning, in Proceedings of Machine Learning Research 229:1257-1267. Available from https://proceedings.mlr.press/v229/mazoure23a.html.
