Rough Terrain Navigation Using Divergence Constrained Model-Based Reinforcement Learning

Sean J Wang, Samuel Triest, Wenshan Wang, Sebastian Scherer, Aaron Johnson
Proceedings of the 5th Conference on Robot Learning, PMLR 164:224-233, 2022.

Abstract

Autonomous navigation of wheeled robots in rough terrain environments has been a long-standing challenge. In these environments, predicting the robot’s trajectory can be challenging due to the complexity of terrain interactions, as well as the divergent dynamics that cause model uncertainty to compound and propagate poorly. This inhibits the robot’s long-horizon decision-making capabilities and often leads to shortsighted navigation strategies. We propose a model-based reinforcement learning algorithm for rough terrain traversal that trains a probabilistic dynamics model to consider the propagating effects of uncertainty. During trajectory prediction, a trajectory tracking controller is incorporated to predict closed-loop trajectories. Our method further increases prediction accuracy and precision by using constrained optimization to find trajectories with low divergence. Using this method, wheeled robots can find non-myopic control strategies that reach destinations with a higher probability of success. We show results on simulated and real-world robots navigating through rough terrain environments.
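The core idea in the abstract — roll a probabilistic dynamics model forward, measure how much the predicted trajectories spread (diverge), and constrain planning to low-divergence plans — can be illustrated with a toy sketch. This is not the paper's implementation: the ensemble of perturbed linear models, the divergence measure (spread of ensemble rollouts), the sampling-based planner, and all function names here are illustrative assumptions.

```python
# Hypothetical sketch of divergence-constrained planning (not the paper's code).
# Assumptions: a learned probabilistic dynamics model is stood in for by an
# ensemble of slightly perturbed linear models, and "divergence" is measured
# as the spread of the ensemble's rollouts at the final step.
import numpy as np

rng = np.random.default_rng(0)

def make_ensemble(n_models=5, dim=2):
    """Stand-in for a trained probabilistic model: each member is a
    perturbed linear map x' = A x + B u."""
    A, B = np.eye(dim), 0.1 * np.eye(dim)
    return [(A + 0.02 * rng.standard_normal((dim, dim)), B)
            for _ in range(n_models)]

def rollout(ensemble, x0, actions):
    """Roll every ensemble member forward under one shared action sequence
    and return the final state predicted by each member."""
    finals = []
    for A, B in ensemble:
        x = x0.copy()
        for u in actions:
            x = A @ x + B @ u
        finals.append(x)
    return np.array(finals)

def plan(ensemble, x0, goal, horizon=10, n_candidates=200, max_divergence=0.05):
    """Sample candidate action sequences, reject those whose ensemble
    predictions diverge too much, and keep the lowest-cost survivor."""
    best_cost, best_actions = np.inf, None
    for _ in range(n_candidates):
        actions = rng.uniform(-1.0, 1.0, size=(horizon, x0.shape[0]))
        finals = rollout(ensemble, x0, actions)
        divergence = finals.std(axis=0).sum()  # spread across the ensemble
        if divergence > max_divergence:
            continue                           # constraint: low divergence only
        cost = np.linalg.norm(finals.mean(axis=0) - goal)
        if cost < best_cost:
            best_cost, best_actions = cost, actions
    return best_actions, best_cost

x0, goal = np.zeros(2), np.array([1.0, 1.0])
actions, cost = plan(make_ensemble(), x0, goal)
```

The divergence constraint makes the trade-off in the abstract concrete: aggressive plans that move the state a lot also spread the ensemble's predictions apart, so the planner prefers plans whose outcome it can predict reliably, even at some cost in progress toward the goal.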

Cite this Paper


BibTeX
@InProceedings{pmlr-v164-wang22c,
  title     = {Rough Terrain Navigation Using Divergence Constrained Model-Based Reinforcement Learning},
  author    = {Wang, Sean J and Triest, Samuel and Wang, Wenshan and Scherer, Sebastian and Johnson, Aaron},
  booktitle = {Proceedings of the 5th Conference on Robot Learning},
  pages     = {224--233},
  year      = {2022},
  editor    = {Faust, Aleksandra and Hsu, David and Neumann, Gerhard},
  volume    = {164},
  series    = {Proceedings of Machine Learning Research},
  month     = {08--11 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v164/wang22c/wang22c.pdf},
  url       = {https://proceedings.mlr.press/v164/wang22c.html},
  abstract  = {Autonomous navigation of wheeled robots in rough terrain environments has been a long-standing challenge. In these environments, predicting the robot’s trajectory can be challenging due to the complexity of terrain interactions, as well as the divergent dynamics that cause model uncertainty to compound and propagate poorly. This inhibits the robot’s long-horizon decision-making capabilities and often leads to shortsighted navigation strategies. We propose a model-based reinforcement learning algorithm for rough terrain traversal that trains a probabilistic dynamics model to consider the propagating effects of uncertainty. During trajectory prediction, a trajectory tracking controller is incorporated to predict closed-loop trajectories. Our method further increases prediction accuracy and precision by using constrained optimization to find trajectories with low divergence. Using this method, wheeled robots can find non-myopic control strategies that reach destinations with a higher probability of success. We show results on simulated and real-world robots navigating through rough terrain environments.}
}
Endnote
%0 Conference Paper
%T Rough Terrain Navigation Using Divergence Constrained Model-Based Reinforcement Learning
%A Sean J Wang
%A Samuel Triest
%A Wenshan Wang
%A Sebastian Scherer
%A Aaron Johnson
%B Proceedings of the 5th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Aleksandra Faust
%E David Hsu
%E Gerhard Neumann
%F pmlr-v164-wang22c
%I PMLR
%P 224--233
%U https://proceedings.mlr.press/v164/wang22c.html
%V 164
%X Autonomous navigation of wheeled robots in rough terrain environments has been a long-standing challenge. In these environments, predicting the robot’s trajectory can be challenging due to the complexity of terrain interactions, as well as the divergent dynamics that cause model uncertainty to compound and propagate poorly. This inhibits the robot’s long-horizon decision-making capabilities and often leads to shortsighted navigation strategies. We propose a model-based reinforcement learning algorithm for rough terrain traversal that trains a probabilistic dynamics model to consider the propagating effects of uncertainty. During trajectory prediction, a trajectory tracking controller is incorporated to predict closed-loop trajectories. Our method further increases prediction accuracy and precision by using constrained optimization to find trajectories with low divergence. Using this method, wheeled robots can find non-myopic control strategies that reach destinations with a higher probability of success. We show results on simulated and real-world robots navigating through rough terrain environments.
APA
Wang, S.J., Triest, S., Wang, W., Scherer, S. & Johnson, A. (2022). Rough Terrain Navigation Using Divergence Constrained Model-Based Reinforcement Learning. Proceedings of the 5th Conference on Robot Learning, in Proceedings of Machine Learning Research 164:224-233. Available from https://proceedings.mlr.press/v164/wang22c.html.
