Localized active learning of Gaussian process state space models
Proceedings of the 2nd Conference on Learning for Dynamics and Control, PMLR 120:490-499, 2020.
Abstract
In learning-based methods for dynamical systems, exploration plays a crucial role, as accurate models of the dynamics need to be learned. Most of the tools developed so far focus on a proper exploration-exploitation trade-off to solve the given task, or actively steer towards unexplored regions of the task space. However, in the latter case, the exploration is performed greedily and fails to capture the effect that learning in the near future will have on model uncertainty in the distant future, effectively steering the system towards exploratory trajectories that yield little information. In this paper, we provide an information theory-based model predictive control method that anticipates the learning effect when exploring dynamical systems and steers the system towards the most informative points. We employ a Gaussian process to model the system dynamics, which enables us to quantify the model uncertainty and estimate future information gains. We include a numerical example that illustrates the effectiveness of the proposed approach.
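To make the idea of scoring exploratory trajectories by their information content concrete, the following is a minimal sketch (not the authors' code) of how a Gaussian process posterior over the dynamics can assign an expected information gain to a candidate sequence of state-input pairs. The kernel choice, noise level, and the helper names `rbf_kernel`, `gp_posterior_var`, and `info_gain` are illustrative assumptions; the paper's actual objective and multi-step treatment may differ.

```python
# Hedged sketch: score candidate (state, input) sequences by the entropy
# reduction a GP dynamics model would experience after observing them.
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel between row-wise inputs A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior_var(X_train, X_query, noise=1e-2):
    """Predictive variance of a zero-mean GP at X_query given inputs X_train."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    k_star = rbf_kernel(X_train, X_query)
    k_ss = rbf_kernel(X_query, X_query)
    v = np.linalg.solve(K, k_star)
    return np.diag(k_ss - k_star.T @ v)

def info_gain(X_train, X_candidate, noise=1e-2):
    """Approximate information gain in nats: 0.5 * log(1 + var(x)/noise),
    summed over candidate points. A multi-step planner, as in the paper's
    setting, would additionally condition later points on earlier ones."""
    var = gp_posterior_var(X_train, X_candidate, noise)
    return 0.5 * np.sum(np.log1p(var / noise))

# Toy usage: compare a trajectory that stays in the already-visited region
# with one that reaches an unexplored region of the (state, input) space.
rng = np.random.default_rng(0)
X_train = rng.uniform(-1, 1, size=(20, 2))        # previously visited (x, u)
traj_explored = rng.uniform(-1, 1, size=(5, 2))   # stays in known region
traj_novel = rng.uniform(2, 3, size=(5, 2))       # visits unexplored region
print("gain (known region):", info_gain(X_train, traj_explored))
print("gain (novel region):", info_gain(X_train, traj_novel))
```

In a receding-horizon setting, a controller would maximize such a gain over candidate input sequences subject to the predicted dynamics, which is how anticipating future learning can redirect exploration away from regions that are already well modeled.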