Progressive Multi-Modal Fusion for Robust 3D Object Detection

Rohit Mohan, Daniele Cattaneo, Florian Drews, Abhinav Valada
Proceedings of The 8th Conference on Robot Learning, PMLR 270:3285-3303, 2025.

Abstract

Multi-sensor fusion is crucial for accurate 3D object detection in autonomous driving, with cameras and LiDAR being the most commonly used sensors. However, existing methods perform sensor fusion in a single view by projecting features from both modalities either in Bird’s Eye View (BEV) or Perspective View (PV), thus sacrificing complementary information such as height or geometric proportions. To address this limitation, we propose ProFusion3D, a progressive fusion framework that combines features in both BEV and PV at both intermediate and object query levels. Our architecture hierarchically fuses local and global features, enhancing the robustness of 3D object detection. Additionally, we introduce a self-supervised mask modeling pre-training strategy to improve multi-modal representation learning and data efficiency through three novel objectives. Extensive experiments on nuScenes and Argoverse2 datasets conclusively demonstrate the efficacy of ProFusion3D. Moreover, ProFusion3D is robust to sensor failure, showing strong performance when only one modality is available.
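To make the high-level idea in the abstract concrete, the following is a minimal, purely illustrative PyTorch sketch of fusing camera and LiDAR features within each view (BEV and PV) and then combining both views through object queries. It is not the authors' released code; all module names, shapes, and the attention-based query-level fusion are assumptions made for illustration.

```python
# Illustrative sketch only -- NOT the ProFusion3D implementation.
# It mimics the abstract's idea: fuse camera + LiDAR features per view (BEV, PV),
# then let object queries attend to both fused views. Shapes and modules are assumed.
import torch
import torch.nn as nn


class ViewFusion(nn.Module):
    """Fuses camera and LiDAR feature maps that share one view (BEV or PV)."""

    def __init__(self, channels: int):
        super().__init__()
        self.mix = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, cam_feat: torch.Tensor, lidar_feat: torch.Tensor) -> torch.Tensor:
        # Channel-wise concatenation followed by a conv block (a simple stand-in
        # for any intermediate-level fusion operator).
        return self.mix(torch.cat([cam_feat, lidar_feat], dim=1))


class QueryLevelFusion(nn.Module):
    """Object queries cross-attend to fused BEV and PV features (flattened to tokens)."""

    def __init__(self, channels: int, num_queries: int = 100):
        super().__init__()
        self.queries = nn.Embedding(num_queries, channels)
        self.attn = nn.MultiheadAttention(channels, num_heads=4, batch_first=True)

    def forward(self, bev: torch.Tensor, pv: torch.Tensor) -> torch.Tensor:
        b = bev.shape[0]
        # Flatten both views into token sequences and concatenate them.
        tokens = torch.cat(
            [bev.flatten(2).transpose(1, 2), pv.flatten(2).transpose(1, 2)], dim=1
        )  # (B, N_bev + N_pv, C)
        q = self.queries.weight.unsqueeze(0).expand(b, -1, -1)
        fused, _ = self.attn(q, tokens, tokens)
        return fused  # (B, num_queries, C), to be fed to detection heads


if __name__ == "__main__":
    C = 64
    cam_bev, lidar_bev = torch.randn(2, C, 50, 50), torch.randn(2, C, 50, 50)
    cam_pv, lidar_pv = torch.randn(2, C, 32, 88), torch.randn(2, C, 32, 88)
    bev = ViewFusion(C)(cam_bev, lidar_bev)
    pv = ViewFusion(C)(cam_pv, lidar_pv)
    out = QueryLevelFusion(C)(bev, pv)
    print(out.shape)  # torch.Size([2, 100, 64])
```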

Cite this Paper


BibTeX
@InProceedings{pmlr-v270-mohan25a,
  title     = {Progressive Multi-Modal Fusion for Robust 3D Object Detection},
  author    = {Mohan, Rohit and Cattaneo, Daniele and Drews, Florian and Valada, Abhinav},
  booktitle = {Proceedings of The 8th Conference on Robot Learning},
  pages     = {3285--3303},
  year      = {2025},
  editor    = {Agrawal, Pulkit and Kroemer, Oliver and Burgard, Wolfram},
  volume    = {270},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--09 Nov},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v270/main/assets/mohan25a/mohan25a.pdf},
  url       = {https://proceedings.mlr.press/v270/mohan25a.html},
  abstract  = {Multi-sensor fusion is crucial for accurate 3D object detection in autonomous driving, with cameras and LiDAR being the most commonly used sensors. However, existing methods perform sensor fusion in a single view by projecting features from both modalities either in Bird’s Eye View (BEV) or Perspective View (PV), thus sacrificing complementary information such as height or geometric proportions. To address this limitation, we propose ProFusion3D, a progressive fusion framework that combines features in both BEV and PV at both intermediate and object query levels. Our architecture hierarchically fuses local and global features, enhancing the robustness of 3D object detection. Additionally, we introduce a self-supervised mask modeling pre-training strategy to improve multi-modal representation learning and data efficiency through three novel objectives. Extensive experiments on nuScenes and Argoverse2 datasets conclusively demonstrate the efficacy of ProFusion3D. Moreover, ProFusion3D is robust to sensor failure, showing strong performance when only one modality is available.}
}
Endnote
%0 Conference Paper
%T Progressive Multi-Modal Fusion for Robust 3D Object Detection
%A Rohit Mohan
%A Daniele Cattaneo
%A Florian Drews
%A Abhinav Valada
%B Proceedings of The 8th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Pulkit Agrawal
%E Oliver Kroemer
%E Wolfram Burgard
%F pmlr-v270-mohan25a
%I PMLR
%P 3285--3303
%U https://proceedings.mlr.press/v270/mohan25a.html
%V 270
%X Multi-sensor fusion is crucial for accurate 3D object detection in autonomous driving, with cameras and LiDAR being the most commonly used sensors. However, existing methods perform sensor fusion in a single view by projecting features from both modalities either in Bird’s Eye View (BEV) or Perspective View (PV), thus sacrificing complementary information such as height or geometric proportions. To address this limitation, we propose ProFusion3D, a progressive fusion framework that combines features in both BEV and PV at both intermediate and object query levels. Our architecture hierarchically fuses local and global features, enhancing the robustness of 3D object detection. Additionally, we introduce a self-supervised mask modeling pre-training strategy to improve multi-modal representation learning and data efficiency through three novel objectives. Extensive experiments on nuScenes and Argoverse2 datasets conclusively demonstrate the efficacy of ProFusion3D. Moreover, ProFusion3D is robust to sensor failure, showing strong performance when only one modality is available.
APA
Mohan, R., Cattaneo, D., Drews, F. & Valada, A. (2025). Progressive Multi-Modal Fusion for Robust 3D Object Detection. Proceedings of The 8th Conference on Robot Learning, in Proceedings of Machine Learning Research 270:3285-3303. Available from https://proceedings.mlr.press/v270/mohan25a.html.