Semantic Terrain Classification for Off-Road Autonomous Driving

Amirreza Shaban, Xiangyun Meng, JoonHo Lee, Byron Boots, Dieter Fox
Proceedings of the 5th Conference on Robot Learning, PMLR 164:619-629, 2022.

Abstract

Producing dense and accurate traversability maps is crucial for autonomous off-road navigation. In this paper, we focus on the problem of classifying terrains into 4 cost classes (free, low-cost, medium-cost, obstacle) for traversability assessment. This requires a robot to reason about both semantics (what objects are present?) and geometric properties (where are the objects located?) of the environment. To achieve this goal, we develop a novel Bird’s Eye View Network (BEVNet), a deep neural network that directly predicts a local map encoding terrain classes from sparse LiDAR inputs. BEVNet processes both geometric and semantic information in a temporally consistent fashion. More importantly, it uses learned prior and history to predict terrain classes in unseen space and into the future, allowing a robot to better appraise its situation. We quantitatively evaluate BEVNet on both on-road and off-road scenarios and show that it outperforms a variety of strong baselines.

Cite this Paper
BibTeX
@InProceedings{pmlr-v164-shaban22a,
  title     = {Semantic Terrain Classification for Off-Road Autonomous Driving},
  author    = {Shaban, Amirreza and Meng, Xiangyun and Lee, JoonHo and Boots, Byron and Fox, Dieter},
  booktitle = {Proceedings of the 5th Conference on Robot Learning},
  pages     = {619--629},
  year      = {2022},
  editor    = {Faust, Aleksandra and Hsu, David and Neumann, Gerhard},
  volume    = {164},
  series    = {Proceedings of Machine Learning Research},
  month     = {08--11 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v164/shaban22a/shaban22a.pdf},
  url       = {https://proceedings.mlr.press/v164/shaban22a.html},
  abstract  = {Producing dense and accurate traversability maps is crucial for autonomous off-road navigation. In this paper, we focus on the problem of classifying terrains into 4 cost classes (free, low-cost, medium-cost, obstacle) for traversability assessment. This requires a robot to reason about both semantics (what objects are present?) and geometric properties (where are the objects located?) of the environment. To achieve this goal, we develop a novel Bird’s Eye View Network (BEVNet), a deep neural network that directly predicts a local map encoding terrain classes from sparse LiDAR inputs. BEVNet processes both geometric and semantic information in a temporally consistent fashion. More importantly, it uses learned prior and history to predict terrain classes in unseen space and into the future, allowing a robot to better appraise its situation. We quantitatively evaluate BEVNet on both on-road and off-road scenarios and show that it outperforms a variety of strong baselines.}
}
Endnote
%0 Conference Paper
%T Semantic Terrain Classification for Off-Road Autonomous Driving
%A Amirreza Shaban
%A Xiangyun Meng
%A JoonHo Lee
%A Byron Boots
%A Dieter Fox
%B Proceedings of the 5th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Aleksandra Faust
%E David Hsu
%E Gerhard Neumann
%F pmlr-v164-shaban22a
%I PMLR
%P 619--629
%U https://proceedings.mlr.press/v164/shaban22a.html
%V 164
%X Producing dense and accurate traversability maps is crucial for autonomous off-road navigation. In this paper, we focus on the problem of classifying terrains into 4 cost classes (free, low-cost, medium-cost, obstacle) for traversability assessment. This requires a robot to reason about both semantics (what objects are present?) and geometric properties (where are the objects located?) of the environment. To achieve this goal, we develop a novel Bird’s Eye View Network (BEVNet), a deep neural network that directly predicts a local map encoding terrain classes from sparse LiDAR inputs. BEVNet processes both geometric and semantic information in a temporally consistent fashion. More importantly, it uses learned prior and history to predict terrain classes in unseen space and into the future, allowing a robot to better appraise its situation. We quantitatively evaluate BEVNet on both on-road and off-road scenarios and show that it outperforms a variety of strong baselines.
APA
Shaban, A., Meng, X., Lee, J., Boots, B. & Fox, D. (2022). Semantic Terrain Classification for Off-Road Autonomous Driving. Proceedings of the 5th Conference on Robot Learning, in Proceedings of Machine Learning Research 164:619-629. Available from https://proceedings.mlr.press/v164/shaban22a.html.