Offline Reinforcement Learning with Pseudometric Learning

Robert Dadashi, Shideh Rezaeifar, Nino Vieillard, Léonard Hussenot, Olivier Pietquin, Matthieu Geist
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:2307-2318, 2021.

Abstract

Offline Reinforcement Learning methods seek to learn a policy from logged transitions of an environment, without any interaction. In the presence of function approximation, and under the assumption of limited coverage of the state-action space of the environment, it is necessary to constrain the policy to visit state-action pairs close to the support of logged transitions. In this work, we propose an iterative procedure to learn a pseudometric (closely related to bisimulation metrics) from logged transitions, and use it to define this notion of closeness. We show its convergence and extend it to the function approximation setting. We then use this pseudometric to define a new lookup-based bonus in an actor-critic algorithm: PLOFF. This bonus encourages the actor to stay close, in terms of the defined pseudometric, to the support of logged transitions. Finally, we evaluate the method on hand manipulation and locomotion tasks.
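As a rough illustration of the lookup-based bonus described in the abstract, the snippet below shows one way such a bonus could be computed; it is a minimal sketch, not the authors' implementation. It assumes a learned pseudometric is available as a callable pseudometric((s, a), (s', a')) (a placeholder name), and defines the bonus as the negative pseudometric distance to the nearest logged state-action pair, so the actor is penalized for straying from the support of the dataset.

import numpy as np

def support_bonus(state, action, dataset_states, dataset_actions,
                  pseudometric, scale=1.0):
    # Brute-force lookup: pseudometric distance from the candidate pair
    # to every logged state-action pair in the offline dataset.
    distances = np.array([
        pseudometric((state, action), (s_b, a_b))
        for s_b, a_b in zip(dataset_states, dataset_actions)
    ])
    # Closeness to the support is measured by the nearest logged pair;
    # the bonus is largest (zero) on the support and decreases away from it.
    return -scale * distances.min()

In an actor-critic setting, a quantity of this kind can be added to the critic target or to the actor objective to keep the learned policy near the data support, which is the role the abstract ascribes to the PLOFF bonus; the exact form used in the paper is defined there.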

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-dadashi21a,
  title     = {Offline Reinforcement Learning with Pseudometric Learning},
  author    = {Dadashi, Robert and Rezaeifar, Shideh and Vieillard, Nino and Hussenot, L{\'e}onard and Pietquin, Olivier and Geist, Matthieu},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {2307--2318},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/dadashi21a/dadashi21a.pdf},
  url       = {https://proceedings.mlr.press/v139/dadashi21a.html}
}
Endnote
%0 Conference Paper
%T Offline Reinforcement Learning with Pseudometric Learning
%A Robert Dadashi
%A Shideh Rezaeifar
%A Nino Vieillard
%A Léonard Hussenot
%A Olivier Pietquin
%A Matthieu Geist
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-dadashi21a
%I PMLR
%P 2307--2318
%U https://proceedings.mlr.press/v139/dadashi21a.html
%V 139
APA
Dadashi, R., Rezaeifar, S., Vieillard, N., Hussenot, L., Pietquin, O., & Geist, M. (2021). Offline Reinforcement Learning with Pseudometric Learning. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:2307-2318. Available from https://proceedings.mlr.press/v139/dadashi21a.html.