GradientDICE: Rethinking Generalized Offline Estimation of Stationary Values

Shangtong Zhang, Bo Liu, Shimon Whiteson
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:11194-11203, 2020.

Abstract

We present GradientDICE for estimating the density ratio between the state distribution of the target policy and the sampling distribution in off-policy reinforcement learning. GradientDICE fixes several problems of GenDICE (Zhang et al., 2020), the current state-of-the-art for estimating such density ratios. Namely, the optimization problem in GenDICE is not a convex-concave saddle-point problem once nonlinearity in optimization variable parameterization is introduced to ensure positivity, so primal-dual algorithms are not guaranteed to find the desired solution. However, such nonlinearity is essential to ensure the consistency of GenDICE even with a tabular representation. This is a fundamental contradiction, resulting from GenDICE’s original formulation of the optimization problem. In GradientDICE, we optimize a different objective from GenDICE by using the Perron-Frobenius theorem and eliminating GenDICE’s use of divergence, such that nonlinearity in parameterization is not necessary for GradientDICE, which is provably convergent under linear function approximation.
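To make the abstract's goal concrete, here is a minimal, hypothetical sketch (not the paper's algorithm) of how a learned density ratio τ(s, a) = d_π(s, a) / d_D(s, a) would be used once estimated: the target policy's average reward is recovered from off-policy data as a ratio-weighted mean of observed rewards. The dataset and ratios below are synthetic stand-ins; in practice τ would come from an estimator such as GradientDICE.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical off-policy dataset: rewards observed under the sampling
# (behavior) distribution d_D.
rewards = rng.normal(loc=1.0, scale=0.5, size=1000)

# Hypothetical learned density ratios tau(s, a); a real run would obtain
# these from a DICE-style estimator rather than sampling them at random.
tau = rng.uniform(0.5, 1.5, size=1000)

# Self-normalized importance-weighted estimate of the target policy's
# average reward: E_{d_D}[tau * r] / E_{d_D}[tau].
estimate = np.sum(tau * rewards) / np.sum(tau)
print(float(estimate))
```

Since the synthetic rewards have mean 1.0, the weighted estimate lands near 1.0; the point of the sketch is only the weighting scheme, which is what makes an accurate ratio estimator valuable for offline evaluation.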

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-zhang20r,
  title     = {{G}radient{DICE}: Rethinking Generalized Offline Estimation of Stationary Values},
  author    = {Zhang, Shangtong and Liu, Bo and Whiteson, Shimon},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {11194--11203},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/zhang20r/zhang20r.pdf},
  url       = {https://proceedings.mlr.press/v119/zhang20r.html},
  abstract  = {We present GradientDICE for estimating the density ratio between the state distribution of the target policy and the sampling distribution in off-policy reinforcement learning. GradientDICE fixes several problems of GenDICE (Zhang et al., 2020), the current state-of-the-art for estimating such density ratios. Namely, the optimization problem in GenDICE is not a convex-concave saddle-point problem once nonlinearity in optimization variable parameterization is introduced to ensure positivity, so primal-dual algorithms are not guaranteed to find the desired solution. However, such nonlinearity is essential to ensure the consistency of GenDICE even with a tabular representation. This is a fundamental contradiction, resulting from GenDICE’s original formulation of the optimization problem. In GradientDICE, we optimize a different objective from GenDICE by using the Perron-Frobenius theorem and eliminating GenDICE’s use of divergence, such that nonlinearity in parameterization is not necessary for GradientDICE, which is provably convergent under linear function approximation.}
}
Endnote
%0 Conference Paper
%T GradientDICE: Rethinking Generalized Offline Estimation of Stationary Values
%A Shangtong Zhang
%A Bo Liu
%A Shimon Whiteson
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-zhang20r
%I PMLR
%P 11194--11203
%U https://proceedings.mlr.press/v119/zhang20r.html
%V 119
%X We present GradientDICE for estimating the density ratio between the state distribution of the target policy and the sampling distribution in off-policy reinforcement learning. GradientDICE fixes several problems of GenDICE (Zhang et al., 2020), the current state-of-the-art for estimating such density ratios. Namely, the optimization problem in GenDICE is not a convex-concave saddle-point problem once nonlinearity in optimization variable parameterization is introduced to ensure positivity, so primal-dual algorithms are not guaranteed to find the desired solution. However, such nonlinearity is essential to ensure the consistency of GenDICE even with a tabular representation. This is a fundamental contradiction, resulting from GenDICE’s original formulation of the optimization problem. In GradientDICE, we optimize a different objective from GenDICE by using the Perron-Frobenius theorem and eliminating GenDICE’s use of divergence, such that nonlinearity in parameterization is not necessary for GradientDICE, which is provably convergent under linear function approximation.
APA
Zhang, S., Liu, B. & Whiteson, S. (2020). GradientDICE: Rethinking Generalized Offline Estimation of Stationary Values. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:11194-11203. Available from https://proceedings.mlr.press/v119/zhang20r.html.