DETR3D: 3D Object Detection from Multi-view Images via 3D-to-2D Queries

Yue Wang, Vitor Campagnolo Guizilini, Tianyuan Zhang, Yilun Wang, Hang Zhao, Justin Solomon
Proceedings of the 5th Conference on Robot Learning, PMLR 164:180-191, 2022.

Abstract

We introduce a framework for multi-camera 3D object detection. In contrast to existing works, which estimate 3D bounding boxes directly from monocular images or use depth prediction networks to generate input for 3D object detection from 2D information, our method manipulates predictions directly in 3D space. Our architecture extracts 2D features from multiple camera images and then uses a sparse set of 3D object queries to index into these 2D features, linking 3D positions to multi-view images using camera transformation matrices. Finally, our model makes a bounding box prediction per object query, using a set-to-set loss to measure the discrepancy between the ground-truth and the prediction. This top-down approach outperforms its bottom-up counterpart in which object bounding box prediction follows per-pixel depth estimation, since it does not suffer from the compounding error introduced by a depth prediction model. Moreover, our method does not require post-processing such as non-maximum suppression, dramatically improving inference speed. We achieve state-of-the-art performance on the nuScenes autonomous driving benchmark.
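The 3D-to-2D linking step described in the abstract (projecting a 3D query's reference point into each camera image via the camera transformation matrix, then sampling image features there) can be sketched as follows. This is a minimal NumPy illustration under our own assumptions, not the authors' implementation; the function names and the bilinear-sampling helper are hypothetical:

```python
import numpy as np

def project_point(point_3d, proj_mat):
    """Project a 3D point (ego frame) into pixel coordinates using a
    3x4 camera projection matrix (intrinsics @ extrinsics)."""
    p = proj_mat @ np.append(point_3d, 1.0)  # homogeneous projection
    depth = p[2]
    valid = depth > 0  # the point must lie in front of the camera
    uv = p[:2] / max(depth, 1e-6)
    return uv, valid

def bilinear_sample(feat_map, uv):
    """Bilinearly sample an (H, W, C) feature map at pixel coords (u, v)."""
    H, W, _ = feat_map.shape
    u = np.clip(uv[0], 0, W - 1)
    v = np.clip(uv[1], 0, H - 1)
    u0, v0 = int(np.floor(u)), int(np.floor(v))
    u1, v1 = min(u0 + 1, W - 1), min(v0 + 1, H - 1)
    du, dv = u - u0, v - v0
    return ((1 - du) * (1 - dv) * feat_map[v0, u0]
            + du * (1 - dv) * feat_map[v0, u1]
            + (1 - du) * dv * feat_map[v1, u0]
            + du * dv * feat_map[v1, u1])

# Per query: project its 3D reference point into every camera and average
# the features from cameras where the projection is valid.
def gather_query_feature(point_3d, proj_mats, feat_maps):
    feats, count = 0.0, 0
    for proj_mat, feat_map in zip(proj_mats, feat_maps):
        uv, valid = project_point(point_3d, proj_mat)
        if valid:
            feats = feats + bilinear_sample(feat_map, uv)
            count += 1
    return feats / max(count, 1)
```

In the full model this gathered feature refines the query in each decoder layer; the sketch above only shows the geometric indexing that replaces dense per-pixel depth estimation.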

Cite this Paper


BibTeX
@InProceedings{pmlr-v164-wang22b,
  title = {DETR3D: 3D Object Detection from Multi-view Images via 3D-to-2D Queries},
  author = {Wang, Yue and Guizilini, Vitor Campagnolo and Zhang, Tianyuan and Wang, Yilun and Zhao, Hang and Solomon, Justin},
  booktitle = {Proceedings of the 5th Conference on Robot Learning},
  pages = {180--191},
  year = {2022},
  editor = {Faust, Aleksandra and Hsu, David and Neumann, Gerhard},
  volume = {164},
  series = {Proceedings of Machine Learning Research},
  month = {08--11 Nov},
  publisher = {PMLR},
  pdf = {https://proceedings.mlr.press/v164/wang22b/wang22b.pdf},
  url = {https://proceedings.mlr.press/v164/wang22b.html},
  abstract = {We introduce a framework for multi-camera 3D object detection. In contrast to existing works, which estimate 3D bounding boxes directly from monocular images or use depth prediction networks to generate input for 3D object detection from 2D information, our method manipulates predictions directly in 3D space. Our architecture extracts 2D features from multiple camera images and then uses a sparse set of 3D object queries to index into these 2D features, linking 3D positions to multi-view images using camera transformation matrices. Finally, our model makes a bounding box prediction per object query, using a set-to-set loss to measure the discrepancy between the ground-truth and the prediction. This top-down approach outperforms its bottom-up counterpart in which object bounding box prediction follows per-pixel depth estimation, since it does not suffer from the compounding error introduced by a depth prediction model. Moreover, our method does not require post-processing such as non-maximum suppression, dramatically improving inference speed. We achieve state-of-the-art performance on the nuScenes autonomous driving benchmark.}
}
Endnote
%0 Conference Paper
%T DETR3D: 3D Object Detection from Multi-view Images via 3D-to-2D Queries
%A Yue Wang
%A Vitor Campagnolo Guizilini
%A Tianyuan Zhang
%A Yilun Wang
%A Hang Zhao
%A Justin Solomon
%B Proceedings of the 5th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Aleksandra Faust
%E David Hsu
%E Gerhard Neumann
%F pmlr-v164-wang22b
%I PMLR
%P 180--191
%U https://proceedings.mlr.press/v164/wang22b.html
%V 164
%X We introduce a framework for multi-camera 3D object detection. In contrast to existing works, which estimate 3D bounding boxes directly from monocular images or use depth prediction networks to generate input for 3D object detection from 2D information, our method manipulates predictions directly in 3D space. Our architecture extracts 2D features from multiple camera images and then uses a sparse set of 3D object queries to index into these 2D features, linking 3D positions to multi-view images using camera transformation matrices. Finally, our model makes a bounding box prediction per object query, using a set-to-set loss to measure the discrepancy between the ground-truth and the prediction. This top-down approach outperforms its bottom-up counterpart in which object bounding box prediction follows per-pixel depth estimation, since it does not suffer from the compounding error introduced by a depth prediction model. Moreover, our method does not require post-processing such as non-maximum suppression, dramatically improving inference speed. We achieve state-of-the-art performance on the nuScenes autonomous driving benchmark.
APA
Wang, Y., Guizilini, V.C., Zhang, T., Wang, Y., Zhao, H. & Solomon, J. (2022). DETR3D: 3D Object Detection from Multi-view Images via 3D-to-2D Queries. Proceedings of the 5th Conference on Robot Learning, in Proceedings of Machine Learning Research 164:180-191. Available from https://proceedings.mlr.press/v164/wang22b.html.