Rough Terrain Navigation Using Divergence Constrained Model-Based Reinforcement Learning
Proceedings of the 5th Conference on Robot Learning, PMLR 164:224-233, 2022.
Abstract
Autonomous navigation of wheeled robots in rough terrain environments has been a long-standing challenge. In these environments, predicting the robot's trajectory is difficult due to the complexity of terrain interactions and the divergent dynamics that cause model uncertainty to compound and propagate poorly. This inhibits the robot's long-horizon decision-making capabilities and often leads to short-sighted navigation strategies. We propose a model-based reinforcement learning algorithm for rough terrain traversal that trains a probabilistic dynamics model to account for the propagating effects of uncertainty. During trajectory prediction, a trajectory tracking controller is incorporated so that closed-loop trajectories are predicted. Our method further improves prediction accuracy and precision by using constrained optimization to find trajectories with low divergence. With this method, wheeled robots can find non-myopic control strategies that reach their destinations with a higher probability of success. We show results on simulated and real-world robots navigating rough terrain environments.
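To make the idea of divergence-constrained, closed-loop trajectory prediction concrete, here is a minimal NumPy sketch. It assumes an ensemble stands in for the probabilistic dynamics model, a simple proportional tracking controller closes the loop during rollouts, and the divergence constraint is enforced by filtering candidate waypoint plans rather than by the paper's actual constrained optimizer. The names (EnsembleDynamics, tracking_controller, select_plan) and the toy linear dynamics are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

class EnsembleDynamics:
    """Toy probabilistic dynamics model: an ensemble of slightly different
    linear models stands in for learned probabilistic networks."""
    def __init__(self, n_members=5, state_dim=4, action_dim=2, seed=0):
        rng = np.random.default_rng(seed)
        self.A = [np.eye(state_dim) + 0.01 * rng.standard_normal((state_dim, state_dim))
                  for _ in range(n_members)]
        self.B = [0.1 * rng.standard_normal((state_dim, action_dim))
                  for _ in range(n_members)]

    def step(self, member, state, action):
        return self.A[member] @ state + self.B[member] @ action


def tracking_controller(state, waypoint, gain=1.0):
    # Proportional controller steering toward the waypoint, so rollouts are
    # closed-loop trajectories rather than open-loop action sequences.
    return gain * (waypoint - state[:2])


def rollout_divergence(model, start, waypoints, horizon):
    """Roll out every ensemble member under the same tracking controller and
    measure divergence as the spread (trace of the covariance) of the
    predicted terminal states."""
    states = [start.copy() for _ in range(len(model.A))]
    for t in range(horizon):
        wp = waypoints[min(t, len(waypoints) - 1)]
        for m in range(len(model.A)):
            action = tracking_controller(states[m], wp)
            states[m] = model.step(m, states[m], action)
    terminal = np.stack(states)
    return np.trace(np.cov(terminal.T)), terminal.mean(axis=0)


def select_plan(model, start, candidate_plans, goal, horizon=20, max_div=0.5):
    """Divergence-constrained plan selection: discard plans whose predicted
    divergence exceeds the threshold, then pick the remaining plan whose mean
    terminal state is closest to the goal."""
    best, best_cost = None, np.inf
    for plan in candidate_plans:
        div, mean_final = rollout_divergence(model, start, plan, horizon)
        if div > max_div:   # constraint: reject divergent trajectories
            continue
        cost = np.linalg.norm(mean_final[:2] - goal)
        if cost < best_cost:
            best, best_cost = plan, cost
    return best
```

In this sketch, low-divergence plans are those on which the ensemble members agree after closed-loop propagation, which mirrors the abstract's goal of preferring trajectories whose predictions remain accurate and precise over long horizons.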