Extending Model-based Policy Gradients for Robots in Heteroscedastic Environments
Proceedings of the 1st Annual Conference on Robot Learning, PMLR 78:438-447, 2017.
Abstract
In this paper, we consider the problem of learning robot control policies in heteroscedastic environments, whose noise properties vary throughout a robot's state and action space. We consider reinforcement learning algorithms that evaluate policies using learned models of the environment, and we extend this class of algorithms to capture heteroscedastic effects with two chained Gaussian processes. We explore the capabilities and limitations of this approach, and demonstrate that it reduces model bias across a variety of simulated robotic systems.
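The abstract's key modeling idea, two chained Gaussian processes, pairs one GP over the latent mean of the dynamics with a second GP over the (log) observation-noise variance, so the learned model's predictive uncertainty can vary across the state-action space. What follows is a minimal illustrative sketch of that two-GP structure on a 1-D regression problem. It uses a simple alternating residual-fitting scheme in the spirit of Goldberg et al. and Kersting et al., not the paper's actual chained-GP inference, and every function name and setting below is an assumption for illustration only.

    # Illustrative sketch (NOT the paper's method): two GPs, one for the
    # latent mean f(x), one for the log noise variance g(x), fit by
    # alternating on residuals so predicted uncertainty varies with x.
    import numpy as np

    def rbf(A, B, lengthscale=1.0, variance=1.0):
        """Squared-exponential kernel between row-stacked inputs A and B."""
        d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
        return variance * np.exp(-0.5 * d2 / lengthscale**2)

    def gp_posterior(X, y, Xs, noise_var):
        """Exact GP posterior mean/variance of the latent function at Xs,
        given a per-training-point observation noise variance noise_var."""
        K = rbf(X, X) + np.diag(noise_var)
        Ks, Kss = rbf(X, Xs), np.diag(rbf(Xs, Xs))
        L = np.linalg.cholesky(K)
        alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
        v = np.linalg.solve(L, Ks)
        return Ks.T @ alpha, Kss - np.sum(v**2, axis=0)

    rng = np.random.default_rng(0)
    X = np.linspace(-3.0, 3.0, 120)[:, None]
    true_std = 0.05 + 0.4 * (X[:, 0] > 0)       # noise level jumps at x = 0
    y = np.sin(2.0 * X[:, 0]) + rng.normal(0.0, true_std)

    n = len(X)
    noise = np.full(n, 0.1)                     # homoscedastic starting guess
    for _ in range(5):                          # alternate mean fit / noise fit
        f_mean, _ = gp_posterior(X, y, X, noise)        # GP 1: latent mean f(x)
        log_r2 = np.log((y - f_mean)**2 + 1e-8)         # log squared residuals
        g_mean, _ = gp_posterior(X, log_r2, X, np.full(n, 1.0))  # GP 2: log noise g(x)
        noise = np.exp(g_mean)                  # input-dependent noise variance

    Xs = np.linspace(-3.0, 3.0, 200)[:, None]
    f_s, f_var = gp_posterior(X, y, Xs, noise)              # latent prediction
    g_s, _ = gp_posterior(X, log_r2, Xs, np.full(n, 1.0))   # predicted log noise
    pred_var = f_var + np.exp(g_s)              # heteroscedastic predictive variance
    print(pred_var[:5], pred_var[-5:])          # small variance for x < 0, large for x > 0

Running the sketch shows the predictive variance staying small on the low-noise half of the input space and growing on the high-noise half, which is exactly the heteroscedastic structure a single homoscedastic GP would average away; this is the model-bias reduction the abstract refers to.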