TrackletMapper: Ground Surface Segmentation and Mapping from Traffic Participant Trajectories

Jannik Zürn, Sebastian Weber, Wolfram Burgard
Proceedings of The 6th Conference on Robot Learning, PMLR 205:1104-1113, 2023.

Abstract

Robustly classifying ground infrastructure such as roads and street crossings is an essential task for mobile robots operating alongside pedestrians. While many semantic segmentation datasets are available for autonomous vehicles, models trained on such datasets exhibit a large domain gap when deployed on robots operating in pedestrian spaces. Manually annotating images recorded from pedestrian viewpoints is both expensive and time-consuming. To overcome this challenge, we propose TrackletMapper, a framework for annotating ground surface types such as sidewalks, roads, and street crossings from object tracklets without requiring human-annotated data. To this end, we project the robot ego-trajectory and the paths of other traffic participants into the ego-view camera images, creating sparse semantic annotations for multiple types of ground surfaces from which a ground segmentation model can be trained. We further show that the model can be self-distilled for additional performance benefits by aggregating a ground surface map and projecting it into the camera images, creating a denser set of training annotations compared to the sparse tracklet annotations. We qualitatively and quantitatively validate our findings on a novel large-scale dataset for mobile robots operating in pedestrian areas. Code and dataset will be made available upon acceptance of the manuscript.
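
The core annotation step, projecting traffic-participant trajectories into ego-view images to obtain sparse labels, lends itself to a short illustration. The following is a minimal sketch and not the authors' released code: it projects ground-contact points of a single tracklet into an image under a standard pinhole camera model and stamps a small disc of class labels around each projected point. The class map, intrinsics, footprint radius, and all function names here are illustrative assumptions.

# Minimal sketch (not the authors' implementation): turning one tracklet
# into sparse ground-surface labels, assuming a pinhole camera and
# ground-contact points already expressed in the camera frame.
import numpy as np

CLASS_IDS = {"road": 1, "sidewalk": 2, "crossing": 3}  # assumed label map
IGNORE = 255  # pixels without any tracklet annotation stay unlabeled

def project_points(points_cam: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Project Nx3 camera-frame points to Nx2 pixel coordinates."""
    z = points_cam[:, 2:3]
    uv = (K @ (points_cam / z).T).T[:, :2]
    return uv

def rasterize_tracklet(label_img, points_cam, K, class_id, radius=4):
    """Stamp a small disc of class labels at each projected tracklet point.

    label_img:  HxW uint8 array initialized to IGNORE.
    points_cam: Nx3 ground-contact points of one tracklet (camera frame).
    """
    h, w = label_img.shape
    in_front = points_cam[:, 2] > 0.5           # keep points ahead of camera
    uv = project_points(points_cam[in_front], K)
    for u, v in uv:
        u, v = int(round(u)), int(round(v))
        if not (0 <= u < w and 0 <= v < h):
            continue
        y0, y1 = max(0, v - radius), min(h, v + radius + 1)
        x0, x1 = max(0, u - radius), min(w, u + radius + 1)
        yy, xx = np.mgrid[y0:y1, x0:x1]
        disc = (yy - v) ** 2 + (xx - u) ** 2 <= radius ** 2
        label_img[y0:y1, x0:x1][disc] = class_id
    return label_img

# Usage: a pedestrian tracklet on a sidewalk yields sparse 'sidewalk' labels.
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
labels = np.full((480, 640), IGNORE, dtype=np.uint8)
tracklet = np.array([[0.5, 1.6, 4.0], [0.6, 1.6, 5.0], [0.7, 1.6, 6.0]])
labels = rasterize_tracklet(labels, tracklet, K, CLASS_IDS["sidewalk"])

A segmentation model trained on such targets would treat IGNORE pixels as unlabeled; the self-distillation stage described in the abstract would then densify supervision by aggregating these projections into a ground surface map over many frames and reprojecting that map into each image.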

Cite this Paper


BibTeX
@InProceedings{pmlr-v205-zurn23a,
  title     = {TrackletMapper: Ground Surface Segmentation and Mapping from Traffic Participant Trajectories},
  author    = {Z\"urn, Jannik and Weber, Sebastian and Burgard, Wolfram},
  booktitle = {Proceedings of The 6th Conference on Robot Learning},
  pages     = {1104--1113},
  year      = {2023},
  editor    = {Liu, Karen and Kulic, Dana and Ichnowski, Jeff},
  volume    = {205},
  series    = {Proceedings of Machine Learning Research},
  month     = {14--18 Dec},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v205/zurn23a/zurn23a.pdf},
  url       = {https://proceedings.mlr.press/v205/zurn23a.html}
}
Endnote
%0 Conference Paper
%T TrackletMapper: Ground Surface Segmentation and Mapping from Traffic Participant Trajectories
%A Jannik Zürn
%A Sebastian Weber
%A Wolfram Burgard
%B Proceedings of The 6th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Karen Liu
%E Dana Kulic
%E Jeff Ichnowski
%F pmlr-v205-zurn23a
%I PMLR
%P 1104--1113
%U https://proceedings.mlr.press/v205/zurn23a.html
%V 205
APA
Zürn, J., Weber, S., & Burgard, W. (2023). TrackletMapper: Ground Surface Segmentation and Mapping from Traffic Participant Trajectories. Proceedings of The 6th Conference on Robot Learning, in Proceedings of Machine Learning Research 205:1104-1113. Available from https://proceedings.mlr.press/v205/zurn23a.html.