4D-Former: Multimodal 4D Panoptic Segmentation

Ali Athar, Enxu Li, Sergio Casas, Raquel Urtasun
Proceedings of The 7th Conference on Robot Learning, PMLR 229:2151-2164, 2023.

Abstract

4D panoptic segmentation is a challenging but practically useful task that requires every point in a LiDAR point-cloud sequence to be assigned a semantic class label, and individual objects to be segmented and tracked over time. Existing approaches utilize only LiDAR inputs which convey limited information in regions with point sparsity. This problem can, however, be mitigated by utilizing RGB camera images which offer appearance-based information that can reinforce the geometry-based LiDAR features. Motivated by this, we propose 4D-Former: a novel method for 4D panoptic segmentation which leverages both LiDAR and image modalities, and predicts semantic masks as well as temporally consistent object masks for the input point-cloud sequence. We encode semantic classes and objects using a set of concise queries which absorb feature information from both data modalities. Additionally, we propose a learned mechanism to associate object tracks over time which reasons over both appearance and spatial location. We apply 4D-Former to the nuScenes and SemanticKITTI datasets where it achieves state-of-the-art results.
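The abstract highlights two technical ingredients: queries that absorb features from both LiDAR and camera modalities, and a learned mechanism that associates object tracks over time using appearance and spatial location. The snippet below is a minimal, hypothetical sketch of the second idea only: it combines an appearance-similarity term and a centroid-distance term into a single assignment cost and solves it with Hungarian matching. The paper's actual association module is learned end-to-end; the function name, weights, and threshold here are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: 4D-Former uses a *learned* association module; this
# hand-crafted cost is a stand-in showing how appearance and spatial cues can be
# combined and matched. All names, weights, and thresholds are hypothetical.
import numpy as np
from scipy.optimize import linear_sum_assignment


def associate_tracks(track_embs, det_embs, track_centroids, det_centroids,
                     appearance_weight=0.5, max_distance=5.0):
    """Match existing object tracks to detections from the current LiDAR sweep.

    track_embs / det_embs: (T, D) and (N, D) L2-normalized appearance embeddings.
    track_centroids / det_centroids: (T, 3) and (N, 3) object centroids in metres.
    Returns a list of (track_idx, detection_idx) pairs.
    """
    # Appearance cost: 1 - cosine similarity between embeddings.
    app_cost = 1.0 - track_embs @ det_embs.T
    # Spatial cost: centroid distance, scaled so max_distance maps to 1.0.
    spa_cost = np.linalg.norm(
        track_centroids[:, None, :] - det_centroids[None, :, :], axis=-1
    ) / max_distance
    cost = appearance_weight * app_cost + (1.0 - appearance_weight) * spa_cost

    # Hungarian matching over the combined cost matrix.
    rows, cols = linear_sum_assignment(cost)
    # Keep only spatially plausible matches; unmatched detections would start new tracks.
    return [(r, c) for r, c in zip(rows, cols) if spa_cost[r, c] <= 1.0]
```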

Cite this Paper


BibTeX
@InProceedings{pmlr-v229-athar23a,
  title     = {4D-Former: Multimodal 4D Panoptic Segmentation},
  author    = {Athar, Ali and Li, Enxu and Casas, Sergio and Urtasun, Raquel},
  booktitle = {Proceedings of The 7th Conference on Robot Learning},
  pages     = {2151--2164},
  year      = {2023},
  editor    = {Tan, Jie and Toussaint, Marc and Darvish, Kourosh},
  volume    = {229},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--09 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v229/athar23a/athar23a.pdf},
  url       = {https://proceedings.mlr.press/v229/athar23a.html}
}
Endnote
%0 Conference Paper
%T 4D-Former: Multimodal 4D Panoptic Segmentation
%A Ali Athar
%A Enxu Li
%A Sergio Casas
%A Raquel Urtasun
%B Proceedings of The 7th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Jie Tan
%E Marc Toussaint
%E Kourosh Darvish
%F pmlr-v229-athar23a
%I PMLR
%P 2151--2164
%U https://proceedings.mlr.press/v229/athar23a.html
%V 229
APA
Athar, A., Li, E., Casas, S. & Urtasun, R. (2023). 4D-Former: Multimodal 4D Panoptic Segmentation. Proceedings of The 7th Conference on Robot Learning, in Proceedings of Machine Learning Research 229:2151-2164. Available from https://proceedings.mlr.press/v229/athar23a.html.
