Extending Model-based Policy Gradients for Robots in Heteroscedastic Environments

John Martin, Brendan Englot
Proceedings of the 1st Annual Conference on Robot Learning, PMLR 78:438-447, 2017.

Abstract

In this paper, we consider the problem of learning robot control policies in heteroscedastic environments, whose noise properties vary throughout a robot’s state and action space. We consider reinforcement learning algorithms that evaluate policies using learned models of the environment, and we extend this class of algorithms to capture heteroscedastic effects with two enchained Gaussian processes. We explore the capabilities and limitations of this approach, and demonstrate that it reduces model bias across a variety of simulated robotic systems.
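For intuition, here is a minimal sketch of the kind of two-stage construction the abstract's "two enchained Gaussian processes" suggests: one Gaussian process fits the mean dynamics, and a second, chained Gaussian process fits an input-dependent log noise variance from the residuals. The toy 1-D system, the variable names, and the use of scikit-learn are illustrative assumptions, not the authors' implementation.

# Hypothetical illustration: two chained GPs for heteroscedastic regression.
# This is not the paper's code; it sketches the general construction.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 10.0, size=(200, 1))          # toy 1-D "state"
noise_std = 0.05 + 0.3 * np.abs(np.sin(X[:, 0]))   # noise varies with state
y = np.sin(X[:, 0]) + noise_std * rng.standard_normal(200)

# GP 1: model the mean of the dynamics.
gp_mean = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp_mean.fit(X, y)

# GP 2: model the log of squared residuals, i.e. a state-dependent log-variance.
residuals = y - gp_mean.predict(X)
gp_noise = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp_noise.fit(X, np.log(residuals**2 + 1e-8))

# Predictive mean and state-dependent standard deviation at test states.
X_test = np.linspace(0.0, 10.0, 50).reshape(-1, 1)
mu = gp_mean.predict(X_test)
sigma = np.sqrt(np.exp(gp_noise.predict(X_test)))

A model-based policy evaluation could then propagate this state-dependent (mu, sigma) through simulated rollouts rather than assuming a single global noise level, the kind of mismatch that contributes to the model bias the paper aims to reduce.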

Cite this Paper

BibTeX
@InProceedings{pmlr-v78-martin17a,
  title     = {Extending Model-based Policy Gradients for Robots in Heteroscedastic Environments},
  author    = {Martin, John and Englot, Brendan},
  booktitle = {Proceedings of the 1st Annual Conference on Robot Learning},
  pages     = {438--447},
  year      = {2017},
  editor    = {Levine, Sergey and Vanhoucke, Vincent and Goldberg, Ken},
  volume    = {78},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--15 Nov},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v78/martin17a/martin17a.pdf},
  url       = {https://proceedings.mlr.press/v78/martin17a.html},
  abstract  = {In this paper, we consider the problem of learning robot control policies in heteroscedastic environments, whose noise properties vary throughout a robot’s state and action space. We consider reinforcement learning algorithms that evaluate policies using learned models of the environment, and we extend this class of algorithms to capture heteroscedastic effects with two enchained Gaussian processes. We explore the capabilities and limitations of this approach, and demonstrate that it reduces model bias across a variety of simulated robotic systems.}
}
Endnote
%0 Conference Paper
%T Extending Model-based Policy Gradients for Robots in Heteroscedastic Environments
%A John Martin
%A Brendan Englot
%B Proceedings of the 1st Annual Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2017
%E Sergey Levine
%E Vincent Vanhoucke
%E Ken Goldberg
%F pmlr-v78-martin17a
%I PMLR
%P 438--447
%U https://proceedings.mlr.press/v78/martin17a.html
%V 78
%X In this paper, we consider the problem of learning robot control policies in heteroscedastic environments, whose noise properties vary throughout a robot’s state and action space. We consider reinforcement learning algorithms that evaluate policies using learned models of the environment, and we extend this class of algorithms to capture heteroscedastic effects with two enchained Gaussian processes. We explore the capabilities and limitations of this approach, and demonstrate that it reduces model bias across a variety of simulated robotic systems.
APA
Martin, J. & Englot, B. (2017). Extending Model-based Policy Gradients for Robots in Heteroscedastic Environments. Proceedings of the 1st Annual Conference on Robot Learning, in Proceedings of Machine Learning Research 78:438-447. Available from https://proceedings.mlr.press/v78/martin17a.html.
