On the Model-Based Stochastic Value Gradient for Continuous Reinforcement Learning

Brandon Amos, Samuel Stanton, Denis Yarats, Andrew Gordon Wilson
Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:6-20, 2021.

Abstract

Model-based reinforcement learning approaches add explicit domain knowledge to agents in hopes of improving sample-efficiency in comparison to model-free agents. However, in practice model-based methods are unable to achieve the same asymptotic performance on challenging continuous control tasks due to the complexity of learning and controlling an explicit world model. In this paper we investigate the stochastic value gradient (SVG), a well-known family of methods for controlling continuous systems, which includes model-based approaches that distill a model-based value expansion into a model-free policy. We consider a variant of the model-based SVG that scales to larger systems and uses 1) an entropy regularization to help with exploration, 2) a learned deterministic world model to improve the short-horizon value estimate, and 3) a learned model-free value estimate after the model's rollout. This SVG variation captures the model-free soft actor-critic method as an instance when the model rollout horizon is zero, and otherwise uses short-horizon model rollouts to improve the value estimate for the policy update. We surpass the asymptotic performance of other model-based methods on the proprioceptive MuJoCo locomotion tasks from the OpenAI gym, including a humanoid. We notably achieve these results with a simple deterministic world model without requiring an ensemble.
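As a reading aid, the entropy-regularized H-step value expansion described above can be sketched roughly as follows; the notation (policy pi, learned deterministic world model f, learned model-free value estimate Q, entropy temperature alpha, discount gamma) is assumed here for illustration and is not quoted from the paper:

\hat{V}^{H}_{\pi}(s_0) =
  \mathbb{E}_{a_t \sim \pi(\cdot \mid s_t)}\Big[
    \sum_{t=0}^{H-1} \gamma^{t}\,\big(r(s_t, a_t) - \alpha \log \pi(a_t \mid s_t)\big)
    + \gamma^{H}\big(Q(s_H, a_H) - \alpha \log \pi(a_H \mid s_H)\big)
  \Big],
  \qquad s_{t+1} = f(s_t, a_t).

Under this sketch, setting H = 0 leaves the empty sum and reduces the objective to \mathbb{E}\big[Q(s_0, a_0) - \alpha \log \pi(a_0 \mid s_0)\big], the soft actor-critic policy objective, which is consistent with the abstract's claim that SAC is recovered when the rollout horizon is zero; for H > 0, the learned deterministic model f supplies the short-horizon rollout used to improve the value estimate for the policy update.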

Cite this Paper


BibTeX
@InProceedings{pmlr-v144-amos21a,
  title = {On the Model-Based Stochastic Value Gradient for Continuous Reinforcement Learning},
  author = {Amos, Brandon and Stanton, Samuel and Yarats, Denis and Wilson, Andrew Gordon},
  booktitle = {Proceedings of the 3rd Conference on Learning for Dynamics and Control},
  pages = {6--20},
  year = {2021},
  editor = {Jadbabaie, Ali and Lygeros, John and Pappas, George J. and Parrilo, Pablo A. and Recht, Benjamin and Tomlin, Claire J. and Zeilinger, Melanie N.},
  volume = {144},
  series = {Proceedings of Machine Learning Research},
  month = {07 -- 08 June},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v144/amos21a/amos21a.pdf},
  url = {https://proceedings.mlr.press/v144/amos21a.html},
  abstract = {Model-based reinforcement learning approaches add explicit domain knowledge to agents in hopes of improving sample-efficiency in comparison to model-free agents. However, in practice model-based methods are unable to achieve the same asymptotic performance on challenging continuous control tasks due to the complexity of learning and controlling an explicit world model. In this paper we investigate the stochastic value gradient (SVG), a well-known family of methods for controlling continuous systems, which includes model-based approaches that distill a model-based value expansion into a model-free policy. We consider a variant of the model-based SVG that scales to larger systems and uses 1) an entropy regularization to help with exploration, 2) a learned deterministic world model to improve the short-horizon value estimate, and 3) a learned model-free value estimate after the model's rollout. This SVG variation captures the model-free soft actor-critic method as an instance when the model rollout horizon is zero, and otherwise uses short-horizon model rollouts to improve the value estimate for the policy update. We surpass the asymptotic performance of other model-based methods on the proprioceptive MuJoCo locomotion tasks from the OpenAI gym, including a humanoid. We notably achieve these results with a simple deterministic world model without requiring an ensemble.}
}
Endnote
%0 Conference Paper
%T On the Model-Based Stochastic Value Gradient for Continuous Reinforcement Learning
%A Brandon Amos
%A Samuel Stanton
%A Denis Yarats
%A Andrew Gordon Wilson
%B Proceedings of the 3rd Conference on Learning for Dynamics and Control
%C Proceedings of Machine Learning Research
%D 2021
%E Ali Jadbabaie
%E John Lygeros
%E George J. Pappas
%E Pablo A. Parrilo
%E Benjamin Recht
%E Claire J. Tomlin
%E Melanie N. Zeilinger
%F pmlr-v144-amos21a
%I PMLR
%P 6--20
%U https://proceedings.mlr.press/v144/amos21a.html
%V 144
%X Model-based reinforcement learning approaches add explicit domain knowledge to agents in hopes of improving sample-efficiency in comparison to model-free agents. However, in practice model-based methods are unable to achieve the same asymptotic performance on challenging continuous control tasks due to the complexity of learning and controlling an explicit world model. In this paper we investigate the stochastic value gradient (SVG), a well-known family of methods for controlling continuous systems, which includes model-based approaches that distill a model-based value expansion into a model-free policy. We consider a variant of the model-based SVG that scales to larger systems and uses 1) an entropy regularization to help with exploration, 2) a learned deterministic world model to improve the short-horizon value estimate, and 3) a learned model-free value estimate after the model's rollout. This SVG variation captures the model-free soft actor-critic method as an instance when the model rollout horizon is zero, and otherwise uses short-horizon model rollouts to improve the value estimate for the policy update. We surpass the asymptotic performance of other model-based methods on the proprioceptive MuJoCo locomotion tasks from the OpenAI gym, including a humanoid. We notably achieve these results with a simple deterministic world model without requiring an ensemble.
APA
Amos, B., Stanton, S., Yarats, D., & Wilson, A. G. (2021). On the Model-Based Stochastic Value Gradient for Continuous Reinforcement Learning. Proceedings of the 3rd Conference on Learning for Dynamics and Control, in Proceedings of Machine Learning Research 144:6-20. Available from https://proceedings.mlr.press/v144/amos21a.html.