Optimal Goal-Reaching Reinforcement Learning via Quasimetric Learning

Tongzhou Wang, Antonio Torralba, Phillip Isola, Amy Zhang
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:36411-36430, 2023.

Abstract

In goal-reaching reinforcement learning (RL), the optimal value function has a particular geometry, called a quasimetric structure. This paper introduces Quasimetric Reinforcement Learning (QRL), a new RL method that utilizes quasimetric models to learn optimal value functions. Distinct from prior approaches, the QRL objective is specifically designed for quasimetrics and provides strong theoretical recovery guarantees. Empirically, we conduct thorough analyses on a discretized MountainCar environment, identifying properties of QRL and its advantages over alternatives. On offline and online goal-reaching benchmarks, QRL also demonstrates improved sample efficiency and performance, across both state-based and image-based observations.
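A brief aside on the geometry the abstract refers to (a sketch based on the standard definition; the value-function identity below follows common goal-reaching RL conventions and is stated here as an illustration, not quoted from the paper): a quasimetric satisfies the metric axioms except symmetry, which is exactly the structure of optimal cost-to-go, since reaching a goal g from a state s may be far harder than the reverse.

    % LaTeX sketch: quasimetric axioms and their role in goal-reaching RL.
    % A quasimetric d on the state space satisfies, for all states s, s', g:
    \[
      d(s, s) = 0, \qquad
      d(s, g) \,\le\, d(s, s') + d(s', g), \qquad
      \text{but } d(s, g) \ne d(g, s) \text{ in general.}
    \]
    % Under a unit per-step cost convention, the optimal value of reaching
    % goal g from state s is the negated optimal cost-to-go, which satisfies
    % the axioms above:
    \[
      V^*(s; g) \;=\; -\,d^*(s, g).
    \]

An intuitive example of the asymmetry: in MountainCar, rolling downhill from the peak costs few steps, while driving back up costs many, so d*(peak, valley) is much smaller than d*(valley, peak).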

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-wang23al,
  title     = {Optimal Goal-Reaching Reinforcement Learning via Quasimetric Learning},
  author    = {Wang, Tongzhou and Torralba, Antonio and Isola, Phillip and Zhang, Amy},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {36411--36430},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/wang23al/wang23al.pdf},
  url       = {https://proceedings.mlr.press/v202/wang23al.html},
  abstract  = {In goal-reaching reinforcement learning (RL), the optimal value function has a particular geometry, called a quasimetric structure. This paper introduces Quasimetric Reinforcement Learning (QRL), a new RL method that utilizes quasimetric models to learn optimal value functions. Distinct from prior approaches, the QRL objective is specifically designed for quasimetrics and provides strong theoretical recovery guarantees. Empirically, we conduct thorough analyses on a discretized MountainCar environment, identifying properties of QRL and its advantages over alternatives. On offline and online goal-reaching benchmarks, QRL also demonstrates improved sample efficiency and performance, across both state-based and image-based observations.}
}
Endnote
%0 Conference Paper
%T Optimal Goal-Reaching Reinforcement Learning via Quasimetric Learning
%A Tongzhou Wang
%A Antonio Torralba
%A Phillip Isola
%A Amy Zhang
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-wang23al
%I PMLR
%P 36411--36430
%U https://proceedings.mlr.press/v202/wang23al.html
%V 202
%X In goal-reaching reinforcement learning (RL), the optimal value function has a particular geometry, called a quasimetric structure. This paper introduces Quasimetric Reinforcement Learning (QRL), a new RL method that utilizes quasimetric models to learn optimal value functions. Distinct from prior approaches, the QRL objective is specifically designed for quasimetrics and provides strong theoretical recovery guarantees. Empirically, we conduct thorough analyses on a discretized MountainCar environment, identifying properties of QRL and its advantages over alternatives. On offline and online goal-reaching benchmarks, QRL also demonstrates improved sample efficiency and performance, across both state-based and image-based observations.
APA
Wang, T., Torralba, A., Isola, P. & Zhang, A. (2023). Optimal Goal-Reaching Reinforcement Learning via Quasimetric Learning. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:36411-36430. Available from https://proceedings.mlr.press/v202/wang23al.html.