A Deep Reinforcement Learning Approach to Marginalized Importance Sampling with the Successor Representation

Scott Fujimoto, David Meger, Doina Precup
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:3518-3529, 2021.

Abstract

Marginalized importance sampling (MIS), which measures the density ratio between the state-action occupancy of a target policy and that of a sampling distribution, is a promising approach for off-policy evaluation. However, current state-of-the-art MIS methods rely on complex optimization tricks and succeed mostly on simple toy problems. We bridge the gap between MIS and deep reinforcement learning by observing that the density ratio can be computed from the successor representation of the target policy. The successor representation can be trained through deep reinforcement learning methodology and decouples the reward optimization from the dynamics of the environment, making the resulting algorithm stable and applicable to high-dimensional domains. We evaluate the empirical performance of our approach on a variety of challenging Atari and MuJoCo environments.
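The core identity the abstract describes — that the density ratio follows from the successor representation — can be illustrated in a tabular setting. The sketch below is a hedged, minimal illustration (a hypothetical 2-state, 2-action MDP with uniform distributions; not the paper's deep RL algorithm): the successor representation over state-action pairs is the resolvent (I − γP_π)⁻¹, the discounted occupancy of the target policy follows from it, and dividing by the sampling distribution gives the MIS ratio.

```python
import numpy as np

# Hypothetical tiny MDP: 2 states, 2 actions (shapes and distributions are
# illustrative only, not taken from the paper).
nS, nA, gamma = 2, 2, 0.9
rng = np.random.default_rng(0)

# Transition probabilities P[s, a, s'] and a target policy pi[s, a].
P = rng.random((nS, nA, nS)); P /= P.sum(-1, keepdims=True)
pi = rng.random((nS, nA)); pi /= pi.sum(-1, keepdims=True)

# State-action transition matrix under pi:
# P_pi[(s,a),(s',a')] = P(s'|s,a) * pi(a'|s').
P_pi = np.einsum('ijk,kl->ijkl', P, pi).reshape(nS * nA, nS * nA)

# Tabular successor representation over state-action pairs:
# Psi = sum_t gamma^t P_pi^t = (I - gamma * P_pi)^{-1}.
Psi = np.linalg.inv(np.eye(nS * nA) - gamma * P_pi)

# Discounted state-action occupancy of pi from an initial distribution d0.
d0 = np.full(nS * nA, 1.0 / (nS * nA))
d_pi = (1 - gamma) * d0 @ Psi

# MIS density ratio against a (here uniform) sampling distribution d_mu.
d_mu = np.full(nS * nA, 1.0 / (nS * nA))
w = d_pi / d_mu

print(np.isclose(d_pi.sum(), 1.0))  # occupancy is a valid distribution
```

In deep RL the matrix inverse is intractable, which is where the paper's contribution lies: the successor representation is instead learned with temporal-difference methods, decoupling it from any particular reward.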

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-fujimoto21a,
  title     = {A Deep Reinforcement Learning Approach to Marginalized Importance Sampling with the Successor Representation},
  author    = {Fujimoto, Scott and Meger, David and Precup, Doina},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {3518--3529},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/fujimoto21a/fujimoto21a.pdf},
  url       = {https://proceedings.mlr.press/v139/fujimoto21a.html},
  abstract  = {Marginalized importance sampling (MIS), which measures the density ratio between the state-action occupancy of a target policy and that of a sampling distribution, is a promising approach for off-policy evaluation. However, current state-of-the-art MIS methods rely on complex optimization tricks and succeed mostly on simple toy problems. We bridge the gap between MIS and deep reinforcement learning by observing that the density ratio can be computed from the successor representation of the target policy. The successor representation can be trained through deep reinforcement learning methodology and decouples the reward optimization from the dynamics of the environment, making the resulting algorithm stable and applicable to high-dimensional domains. We evaluate the empirical performance of our approach on a variety of challenging Atari and MuJoCo environments.}
}
Endnote
%0 Conference Paper
%T A Deep Reinforcement Learning Approach to Marginalized Importance Sampling with the Successor Representation
%A Scott Fujimoto
%A David Meger
%A Doina Precup
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-fujimoto21a
%I PMLR
%P 3518--3529
%U https://proceedings.mlr.press/v139/fujimoto21a.html
%V 139
%X Marginalized importance sampling (MIS), which measures the density ratio between the state-action occupancy of a target policy and that of a sampling distribution, is a promising approach for off-policy evaluation. However, current state-of-the-art MIS methods rely on complex optimization tricks and succeed mostly on simple toy problems. We bridge the gap between MIS and deep reinforcement learning by observing that the density ratio can be computed from the successor representation of the target policy. The successor representation can be trained through deep reinforcement learning methodology and decouples the reward optimization from the dynamics of the environment, making the resulting algorithm stable and applicable to high-dimensional domains. We evaluate the empirical performance of our approach on a variety of challenging Atari and MuJoCo environments.
APA
Fujimoto, S., Meger, D. &amp; Precup, D. (2021). A Deep Reinforcement Learning Approach to Marginalized Importance Sampling with the Successor Representation. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:3518-3529. Available from https://proceedings.mlr.press/v139/fujimoto21a.html.