Proceedings of the 1st Annual Conference on Robot Learning, PMLR 78:438-447, 2017.
Abstract
In this paper, we consider the problem of learning robot control policies in heteroscedastic environments, whose noise properties vary throughout a robot’s state and action space. We focus on reinforcement learning algorithms that evaluate policies using learned models of the environment, and we extend this class of algorithms to capture heteroscedastic effects with two chained Gaussian processes. We explore the capabilities and limitations of this approach, and demonstrate that it reduces model bias across a variety of simulated robotic systems.
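To make the chained-GP idea concrete, here is a minimal, hypothetical Python/NumPy sketch (not the authors' implementation): one GP models the mean of the observations, and a second GP, fit to the log squared residuals, supplies an input-dependent noise estimate that is fed back into the first. The function names and the two-stage fitting heuristic are illustrative assumptions.

```python
# Hypothetical sketch (not the paper's code): two-stage heteroscedastic GP
# regression, chaining a GP for the mean with a GP for the log noise level.
import numpy as np

def rbf(A, B, ls=1.0, var=1.0):
    """Squared-exponential kernel between row-wise inputs A and B."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return var * np.exp(-0.5 * d2 / ls**2)

def gp_posterior(X, y, Xs, noise_var):
    """GP regression posterior mean/variance; noise_var may be a scalar
    (homoscedastic) or a per-point vector (heteroscedastic)."""
    K = rbf(X, X) + np.diag(np.broadcast_to(noise_var, len(X)))
    Ks, Kss = rbf(X, Xs), rbf(Xs, Xs)
    mean = Ks.T @ np.linalg.solve(K, y)
    var = np.diag(Kss - Ks.T @ np.linalg.solve(K, Ks))
    return mean, var

# Toy 1-D data whose noise grows with |x| -- a heteroscedastic environment.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (100, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.05 + 0.2 * np.abs(X[:, 0]))

# Stage 1: fit an ordinary homoscedastic GP to the observations.
mu, _ = gp_posterior(X, y, X, noise_var=0.1)

# Stage 2: fit a second GP to the log squared residuals, yielding an
# input-dependent noise estimate that is chained back into the first GP.
log_r2 = np.log((y - mu)**2 + 1e-8)
log_noise, _ = gp_posterior(X, log_r2, X, noise_var=1.0)
mu_het, var_het = gp_posterior(X, y, X, noise_var=np.exp(log_noise))
```

In the paper's model-based policy-gradient setting, a learned mean and input-dependent variance of this kind would together define the transition model used to evaluate policies.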
@InProceedings{pmlr-v78-martin17a,
title = {Extending Model-based Policy Gradients for Robots in Heteroscedastic Environments},
author = {John Martin and Brendan Englot},
booktitle = {Proceedings of the 1st Annual Conference on Robot Learning},
pages = {438--447},
year = {2017},
editor = {Sergey Levine and Vincent Vanhoucke and Ken Goldberg},
volume = {78},
series = {Proceedings of Machine Learning Research},
month = {13--15 Nov},
publisher = {PMLR},
pdf = {http://proceedings.mlr.press/v78/martin17a/martin17a.pdf},
url = {http://proceedings.mlr.press/v78/martin17a.html},
abstract = {In this paper, we consider the problem of learning robot control policies in heteroscedastic environments, whose noise properties vary throughout a robot’s state and action space. We focus on reinforcement learning algorithms that evaluate policies using learned models of the environment, and we extend this class of algorithms to capture heteroscedastic effects with two chained Gaussian processes. We explore the capabilities and limitations of this approach, and demonstrate that it reduces model bias across a variety of simulated robotic systems.}
}
%0 Conference Paper
%T Extending Model-based Policy Gradients for Robots in Heteroscedastic Environments
%A John Martin
%A Brendan Englot
%B Proceedings of the 1st Annual Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2017
%E Sergey Levine
%E Vincent Vanhoucke
%E Ken Goldberg
%F pmlr-v78-martin17a
%I PMLR
%J Proceedings of Machine Learning Research
%P 438--447
%U http://proceedings.mlr.press/v78/martin17a.html
%V 78
%W PMLR
%X In this paper, we consider the problem of learning robot control policies in heteroscedastic environments, whose noise properties vary throughout a robot’s state and action space. We focus on reinforcement learning algorithms that evaluate policies using learned models of the environment, and we extend this class of algorithms to capture heteroscedastic effects with two chained Gaussian processes. We explore the capabilities and limitations of this approach, and demonstrate that it reduces model bias across a variety of simulated robotic systems.
Martin, J. & Englot, B. (2017). Extending Model-based Policy Gradients for Robots in Heteroscedastic Environments. Proceedings of the 1st Annual Conference on Robot Learning, in PMLR 78:438-447.