DORT: Modeling Dynamic Objects in Recurrent for Multi-Camera 3D Object Detection and Tracking

Qing LIAN, Tai Wang, Dahua Lin, Jiangmiao Pang
Proceedings of The 7th Conference on Robot Learning, PMLR 229:3749-3765, 2023.

Abstract

Recent multi-camera 3D object detectors usually leverage temporal information to construct multi-view stereo, which alleviates ill-posed depth estimation. However, they typically assume all the objects are static and directly aggregate features across frames. This work begins with a theoretical and empirical analysis revealing that ignoring the motion of moving objects can result in serious localization bias. Therefore, we propose to model Dynamic Objects in RecurrenT (DORT) to tackle this problem. In contrast to previous global Bird's-Eye-View (BEV) methods, DORT extracts object-wise local volumes for motion estimation, which also alleviates the heavy computational burden. By iteratively refining the estimated object motion and location, the preceding features can be precisely aggregated to the current frame to mitigate the aforementioned adverse effects. The simple framework has two appealing properties. It is flexible and practical: it can be plugged into most camera-based 3D object detectors. As object motion predictions are in the loop, it can easily track objects across frames according to their nearest center distances. Without bells and whistles, DORT outperforms all previous methods on the nuScenes detection and tracking benchmarks with 62.8% NDS and 57.6% AMOTA, respectively. The source code will be available at https://github.com/OpenRobotLab/DORT.
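The tracking step described above associates detections by nearest center distance after motion compensation. The following is a minimal illustrative sketch of such a greedy nearest-center association, not the authors' implementation; the function name, the distance threshold, and the data layout (BEV center coordinates) are assumptions for illustration only.

```python
import math

def associate_by_center(prev_tracks, curr_dets, max_dist=2.0):
    """Greedy nearest-center association (illustrative sketch only).

    prev_tracks: list of (track_id, (x, y)) BEV centers, assumed already
                 motion-compensated to the current frame by the estimated
                 object motion.
    curr_dets:   list of (x, y) BEV centers of current-frame detections.
    Returns a dict mapping detection index -> matched track id.
    """
    # Collect all candidate pairs within the distance gate.
    pairs = []
    for tid, (px, py) in prev_tracks:
        for j, (cx, cy) in enumerate(curr_dets):
            d = math.hypot(px - cx, py - cy)
            if d <= max_dist:
                pairs.append((d, tid, j))
    # Greedily match closest pairs first; each track and detection
    # is used at most once.
    pairs.sort()
    matched, used_tracks, used_dets = {}, set(), set()
    for d, tid, j in pairs:
        if tid in used_tracks or j in used_dets:
            continue
        matched[j] = tid
        used_tracks.add(tid)
        used_dets.add(j)
    return matched
```

Unmatched detections would spawn new tracks and unmatched tracks would age out; those bookkeeping details are omitted here.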

Cite this Paper


BibTeX
@InProceedings{pmlr-v229-lian23a,
  title     = {DORT: Modeling Dynamic Objects in Recurrent for Multi-Camera 3D Object Detection and Tracking},
  author    = {Lian, Qing and Wang, Tai and Lin, Dahua and Pang, Jiangmiao},
  booktitle = {Proceedings of The 7th Conference on Robot Learning},
  pages     = {3749--3765},
  year      = {2023},
  editor    = {Tan, Jie and Toussaint, Marc and Darvish, Kourosh},
  volume    = {229},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--09 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v229/lian23a/lian23a.pdf},
  url       = {https://proceedings.mlr.press/v229/lian23a.html},
  abstract  = {Recent multi-camera 3D object detectors usually leverage temporal information to construct multi-view stereo that alleviates the ill-posed depth estimation. However, they typically assume all the objects are static and directly aggregate features across frames. This work begins with a theoretical and empirical analysis to reveal that ignoring the motion of moving objects can result in serious localization bias. Therefore, we propose to model Dynamic Objects in RecurrenT (DORT) to tackle this problem. In contrast to previous global BirdEye-View (BEV) methods, DORT extracts object-wise local volumes for motion estimation that also alleviates the heavy computational burden. By iteratively refining the estimated object motion and location, the preceding features can be precisely aggregated to the current frame to mitigate the aforementioned adverse effects. The simple framework has two significant appealing properties. It is flexible and practical that can be plugged into most camera-based 3D object detectors. As there are predictions of object motion in the loop, it can easily track objects across frames according to their nearest center distances. Without bells and whistles, DORT outperforms all the previous methods on the nuScenes detection and tracking benchmarks with $62.8\%$ NDS and $57.6\%$ AMOTA, respectively. The source code will be available at https://github.com/OpenRobotLab/DORT.}
}
Endnote
%0 Conference Paper
%T DORT: Modeling Dynamic Objects in Recurrent for Multi-Camera 3D Object Detection and Tracking
%A Qing Lian
%A Tai Wang
%A Dahua Lin
%A Jiangmiao Pang
%B Proceedings of The 7th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Jie Tan
%E Marc Toussaint
%E Kourosh Darvish
%F pmlr-v229-lian23a
%I PMLR
%P 3749--3765
%U https://proceedings.mlr.press/v229/lian23a.html
%V 229
%X Recent multi-camera 3D object detectors usually leverage temporal information to construct multi-view stereo that alleviates the ill-posed depth estimation. However, they typically assume all the objects are static and directly aggregate features across frames. This work begins with a theoretical and empirical analysis to reveal that ignoring the motion of moving objects can result in serious localization bias. Therefore, we propose to model Dynamic Objects in RecurrenT (DORT) to tackle this problem. In contrast to previous global BirdEye-View (BEV) methods, DORT extracts object-wise local volumes for motion estimation that also alleviates the heavy computational burden. By iteratively refining the estimated object motion and location, the preceding features can be precisely aggregated to the current frame to mitigate the aforementioned adverse effects. The simple framework has two significant appealing properties. It is flexible and practical that can be plugged into most camera-based 3D object detectors. As there are predictions of object motion in the loop, it can easily track objects across frames according to their nearest center distances. Without bells and whistles, DORT outperforms all the previous methods on the nuScenes detection and tracking benchmarks with 62.8% NDS and 57.6% AMOTA, respectively. The source code will be available at https://github.com/OpenRobotLab/DORT.
APA
Lian, Q., Wang, T., Lin, D., & Pang, J. (2023). DORT: Modeling Dynamic Objects in Recurrent for Multi-Camera 3D Object Detection and Tracking. Proceedings of The 7th Conference on Robot Learning, in Proceedings of Machine Learning Research 229:3749-3765. Available from https://proceedings.mlr.press/v229/lian23a.html.