On the Generalization Gap in Reparameterizable Reinforcement Learning

Huan Wang, Stephan Zheng, Caiming Xiong, Richard Socher
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:6648-6658, 2019.

Abstract

Understanding generalization in reinforcement learning (RL) is a significant challenge, as many common assumptions of traditional supervised learning theory do not apply. We focus on the special class of reparameterizable RL problems, where the trajectory distribution can be decomposed using the reparameterization trick. For this problem class, estimating the expected return is efficient and the trajectory can be computed deterministically given peripheral random variables, which enables us to study reparameterizable RL using supervised learning and transfer learning theory. Through these relationships, we derive guarantees on the gap between the expected and empirical return for both intrinsic and external errors, based on Rademacher complexity as well as the PAC-Bayes bound. Our bound suggests that the generalization capability of reparameterizable RL is related to multiple factors, including the “smoothness” of the environment transition, the reward, and the agent’s policy function class. We also empirically verify the relationship between the generalization gap and these factors through simulations.
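
The following Python sketch illustrates the reparameterization idea described in the abstract, under assumed interfaces: the policy, transition, and reward callables and the noise sequences are illustrative placeholders, not the paper's code. Once the peripheral noise variables are drawn up front, each trajectory and its return are deterministic functions of the policy, so the empirical return is a plain sample average and can be compared against the expected return over the noise distribution.

    import numpy as np

    def rollout(policy, transition, reward, s0, noise_seq):
        # Given a pre-drawn sequence of peripheral noise variables, the
        # trajectory is deterministic: the next state is a deterministic
        # function of (state, action, noise) rather than a fresh
        # stochastic draw inside the loop.
        s, ret = s0, 0.0
        for xi in noise_seq:
            a = policy(s)
            ret += reward(s, a)
            s = transition(s, a, xi)
        return ret

    def empirical_return(policy, transition, reward, s0, noise_seqs):
        # Sample-average return over n independently drawn noise sequences;
        # the generalization gap studied in the paper is the difference
        # between this empirical return and its expectation over the noise.
        return np.mean([rollout(policy, transition, reward, s0, xis)
                        for xis in noise_seqs])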

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-wang19o,
  title     = {On the Generalization Gap in Reparameterizable Reinforcement Learning},
  author    = {Wang, Huan and Zheng, Stephan and Xiong, Caiming and Socher, Richard},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {6648--6658},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/wang19o/wang19o.pdf},
  url       = {https://proceedings.mlr.press/v97/wang19o.html}
}
Endnote
%0 Conference Paper
%T On the Generalization Gap in Reparameterizable Reinforcement Learning
%A Huan Wang
%A Stephan Zheng
%A Caiming Xiong
%A Richard Socher
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-wang19o
%I PMLR
%P 6648--6658
%U https://proceedings.mlr.press/v97/wang19o.html
%V 97
APA
Wang, H., Zheng, S., Xiong, C. & Socher, R. (2019). On the Generalization Gap in Reparameterizable Reinforcement Learning. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:6648-6658. Available from https://proceedings.mlr.press/v97/wang19o.html.