Marginalized Operators for Off-policy Reinforcement Learning

Yunhao Tang, Mark Rowland, Remi Munos, Michal Valko
Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:655-679, 2022.

Abstract

In this work, we propose marginalized operators, a new class of off-policy evaluation operators for reinforcement learning. Marginalized operators strictly generalize generic multi-step operators, such as Retrace, recovering them as special cases. Marginalized operators also suggest a form of sample-based estimate with potentially lower variance than sample-based estimates of the original multi-step operators. We show that the estimates for marginalized operators can be computed in a scalable way, generalizing prior results on marginalized importance sampling as special cases. Finally, we empirically demonstrate that marginalized operators provide performance gains on off-policy evaluation problems and in downstream policy optimization algorithms.
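For background only (this is the standard definition from Munos et al., 2016, not a result reproduced from the paper above), the Retrace operator referenced in the abstract is a representative multi-step off-policy evaluation operator; a common statement of it, with target policy \(\pi\), behavior policy \(\mu\), and \(\lambda \in [0, 1]\), is

\[
\mathcal{R} Q(x, a) \;=\; Q(x, a) \;+\; \mathbb{E}_{\mu}\!\left[\sum_{t \ge 0} \gamma^{t} \left(\prod_{s=1}^{t} c_{s}\right)\big(r_{t} + \gamma\, \mathbb{E}_{\pi}\!\left[Q(x_{t+1}, \cdot)\right] - Q(x_{t}, a_{t})\big)\right],
\qquad
c_{s} \;=\; \lambda \min\!\left(1, \frac{\pi(a_{s} \mid x_{s})}{\mu(a_{s} \mid x_{s})}\right).
\]

Marginalized operators, as described in the abstract, generalize operators of this multi-step form.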

Cite this Paper


BibTeX
@InProceedings{pmlr-v151-tang22a,
  title     = {Marginalized Operators for Off-policy Reinforcement Learning},
  author    = {Tang, Yunhao and Rowland, Mark and Munos, Remi and Valko, Michal},
  booktitle = {Proceedings of The 25th International Conference on Artificial Intelligence and Statistics},
  pages     = {655--679},
  year      = {2022},
  editor    = {Camps-Valls, Gustau and Ruiz, Francisco J. R. and Valera, Isabel},
  volume    = {151},
  series    = {Proceedings of Machine Learning Research},
  month     = {28--30 Mar},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v151/tang22a/tang22a.pdf},
  url       = {https://proceedings.mlr.press/v151/tang22a.html},
  abstract  = {In this work, we propose marginalized operators, a new class of off-policy evaluation operators for reinforcement learning. Marginalized operators strictly generalize generic multi-step operators, such as Retrace, as special cases. Marginalized operators also suggest a form of sample-based estimates with potential variance reduction, compared to sample-based estimates of the original multi-step operators. We show that the estimates for marginalized operators can be computed in a scalable way, which also generalizes prior results on marginalized importance sampling as special cases. Finally, we empirically demonstrate that marginalized operators provide performance gains to off-policy evaluation problems and downstream policy optimization algorithms.}
}
Endnote
%0 Conference Paper
%T Marginalized Operators for Off-policy Reinforcement Learning
%A Yunhao Tang
%A Mark Rowland
%A Remi Munos
%A Michal Valko
%B Proceedings of The 25th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2022
%E Gustau Camps-Valls
%E Francisco J. R. Ruiz
%E Isabel Valera
%F pmlr-v151-tang22a
%I PMLR
%P 655--679
%U https://proceedings.mlr.press/v151/tang22a.html
%V 151
%X In this work, we propose marginalized operators, a new class of off-policy evaluation operators for reinforcement learning. Marginalized operators strictly generalize generic multi-step operators, such as Retrace, as special cases. Marginalized operators also suggest a form of sample-based estimates with potential variance reduction, compared to sample-based estimates of the original multi-step operators. We show that the estimates for marginalized operators can be computed in a scalable way, which also generalizes prior results on marginalized importance sampling as special cases. Finally, we empirically demonstrate that marginalized operators provide performance gains to off-policy evaluation problems and downstream policy optimization algorithms.
APA
Tang, Y., Rowland, M., Munos, R. & Valko, M. (2022). Marginalized Operators for Off-policy Reinforcement Learning. Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 151:655-679. Available from https://proceedings.mlr.press/v151/tang22a.html.
