Off-Policy Policy Gradient with Stationary Distribution Correction

Yao Liu, Adith Swaminathan, Alekh Agarwal, Emma Brunskill
Proceedings of The 35th Uncertainty in Artificial Intelligence Conference, PMLR 115:1180-1190, 2020.

Abstract

We study the problem of off-policy policy optimization in Markov decision processes, and develop a novel off-policy policy gradient method. Prior off-policy policy gradient approaches have generally ignored the mismatch between the distribution of states visited under the behavior policy used to collect data, and what would be the distribution of states under the learned policy. Here we build on recent progress for estimating the ratio of the state distributions under behavior and evaluation policies for policy evaluation, and present an off-policy policy gradient optimization technique that can account for this mismatch in distributions. We present an illustrative example of why this is important and a theoretical convergence guarantee for our approach. Empirically, we compare our method in simulations to several strong baselines which do not correct for this mismatch, significantly improving in the quality of the policy discovered.
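The core idea in the abstract can be sketched as a reweighted policy-gradient estimator: each off-policy sample is scaled both by the usual action-probability ratio and by an estimate of the stationary state-distribution ratio d_pi(s)/d_mu(s). The toy MDP, the softmax parameterization, and the placeholder ratios `w_hat` below are all illustrative assumptions, not the paper's actual estimator (which learns these ratios from data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: tabular softmax policy over a small state/action space.
n_states, n_actions = 4, 2
theta = np.zeros((n_states, n_actions))

def pi(theta, s):
    """Softmax action probabilities at state s."""
    z = np.exp(theta[s] - theta[s].max())
    return z / z.sum()

def grad_log_pi(theta, s, a):
    """Gradient of log pi(a|s) for a tabular softmax policy."""
    g = np.zeros_like(theta)
    g[s] = -pi(theta, s)
    g[s, a] += 1.0
    return g

# Off-policy data: (state, action, Q-value estimate) triples collected
# under a known behavior policy mu (here uniform, for illustration).
mu = np.full((n_states, n_actions), 1.0 / n_actions)
data = [(int(rng.integers(n_states)), int(rng.integers(n_actions)), float(rng.normal()))
        for _ in range(100)]

# w_hat[s] stands in for the estimated stationary-distribution ratio
# d_pi(s) / d_mu(s); prior off-policy methods implicitly set this to 1.
w_hat = np.ones(n_states)

def corrected_gradient(theta, data, w_hat, mu):
    """Estimate E_mu[ w(s) * (pi(a|s)/mu(a|s)) * Q(s,a) * grad log pi(a|s) ]."""
    g = np.zeros_like(theta)
    for s, a, q in data:
        rho = pi(theta, s)[a] / mu[s, a]  # per-action importance ratio
        g += w_hat[s] * rho * q * grad_log_pi(theta, s, a)
    return g / len(data)

g = corrected_gradient(theta, data, w_hat, mu)
print(g.shape)  # (4, 2)
```

Without the `w_hat` factor this reduces to the ordinary off-policy policy gradient that ignores the state-distribution mismatch, which is exactly the gap the paper addresses.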

Cite this Paper


BibTeX
@InProceedings{pmlr-v115-liu20a,
  title     = {Off-Policy Policy Gradient with Stationary Distribution Correction},
  author    = {Liu, Yao and Swaminathan, Adith and Agarwal, Alekh and Brunskill, Emma},
  booktitle = {Proceedings of The 35th Uncertainty in Artificial Intelligence Conference},
  pages     = {1180--1190},
  year      = {2020},
  editor    = {Adams, Ryan P. and Gogate, Vibhav},
  volume    = {115},
  series    = {Proceedings of Machine Learning Research},
  month     = {22--25 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v115/liu20a/liu20a.pdf},
  url       = {https://proceedings.mlr.press/v115/liu20a.html},
  abstract  = {We study the problem of off-policy policy optimization in Markov decision processes, and develop a novel off-policy policy gradient method. Prior off-policy policy gradient approaches have generally ignored the mismatch between the distribution of states visited under the behavior policy used to collect data, and what would be the distribution of states under the learned policy. Here we build on recent progress for estimating the ratio of the state distributions under behavior and evaluation policies for policy evaluation, and present an off-policy policy gradient optimization technique that can account for this mismatch in distributions. We present an illustrative example of why this is important and a theoretical convergence guarantee for our approach. Empirically, we compare our method in simulations to several strong baselines which do not correct for this mismatch, significantly improving in the quality of the policy discovered.}
}
Endnote
%0 Conference Paper
%T Off-Policy Policy Gradient with Stationary Distribution Correction
%A Yao Liu
%A Adith Swaminathan
%A Alekh Agarwal
%A Emma Brunskill
%B Proceedings of The 35th Uncertainty in Artificial Intelligence Conference
%C Proceedings of Machine Learning Research
%D 2020
%E Ryan P. Adams
%E Vibhav Gogate
%F pmlr-v115-liu20a
%I PMLR
%P 1180--1190
%U https://proceedings.mlr.press/v115/liu20a.html
%V 115
%X We study the problem of off-policy policy optimization in Markov decision processes, and develop a novel off-policy policy gradient method. Prior off-policy policy gradient approaches have generally ignored the mismatch between the distribution of states visited under the behavior policy used to collect data, and what would be the distribution of states under the learned policy. Here we build on recent progress for estimating the ratio of the state distributions under behavior and evaluation policies for policy evaluation, and present an off-policy policy gradient optimization technique that can account for this mismatch in distributions. We present an illustrative example of why this is important and a theoretical convergence guarantee for our approach. Empirically, we compare our method in simulations to several strong baselines which do not correct for this mismatch, significantly improving in the quality of the policy discovered.
APA
Liu, Y., Swaminathan, A., Agarwal, A., & Brunskill, E. (2020). Off-Policy Policy Gradient with Stationary Distribution Correction. Proceedings of The 35th Uncertainty in Artificial Intelligence Conference, in Proceedings of Machine Learning Research 115:1180-1190. Available from https://proceedings.mlr.press/v115/liu20a.html.