MoNet3D: Towards Accurate Monocular 3D Object Localization in Real Time

Xichuan Zhou, Yicong Peng, Chunqiao Long, Fengbo Ren, Cong Shi
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:11503-11512, 2020.

Abstract

Monocular multi-object detection and localization in 3D space is a challenging task. MoNet3D is a novel and effective framework that predicts the 3D position of each object in a monocular image and draws a 3D bounding box around each object. The method incorporates prior knowledge of the spatial geometric correlation of neighboring objects into the deep-neural-network training process to improve the accuracy of 3D object localization. Experiments on the KITTI dataset show that the accuracy of predicting an object's depth and horizontal coordinate in 3D space reaches 96.25% and 94.74%, respectively, while the method processes images in real time at 27.85 FPS. Our code is publicly available at https://github.com/CQUlearningsystemgroup/YicongPeng
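The core idea in the abstract, encoding the spatial geometric correlation of neighboring objects as a prior during training, can be illustrated with a pairwise regularization term. The snippet below is not the authors' implementation; it is a minimal PyTorch sketch of one plausible form of such a prior, a locality-preserving pairwise loss, in which geometric_prior_loss, sigma, and lam are hypothetical names and hyperparameters.

import torch
import torch.nn.functional as F

def geometric_prior_loss(pred_xyz, gt_xyz, sigma=1.0):
    """Locality-preserving regularizer (illustrative, not the paper's code):
    objects that are close together in the ground truth should remain
    similarly close in the prediction.

    pred_xyz, gt_xyz: (N, 3) tensors of predicted / ground-truth 3D object
    centers for the N objects detected in one image.
    """
    # Pairwise distances between all objects, predicted and ground truth.
    d_pred = torch.cdist(pred_xyz, pred_xyz)   # (N, N)
    d_gt = torch.cdist(gt_xyz, gt_xyz)         # (N, N)
    # Gaussian affinity: neighboring objects get larger weights, so the
    # penalty concentrates on preserving local geometric structure.
    w = torch.exp(-d_gt ** 2 / (2 * sigma ** 2))
    return (w * (d_pred - d_gt) ** 2).mean()

def total_loss(pred_xyz, gt_xyz, lam=0.1):
    # Standard per-object regression loss plus the geometric-correlation
    # prior; lam balances the two terms.
    reg = F.smooth_l1_loss(pred_xyz, gt_xyz)
    return reg + lam * geometric_prior_loss(pred_xyz, gt_xyz)

The Gaussian weights emphasize pairs of objects that are close in the ground truth, so predictions that distort the local geometric structure of the scene are penalized more heavily than errors between distant objects.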

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-zhou20b,
  title     = {{M}o{N}et3{D}: Towards Accurate Monocular 3{D} Object Localization in Real Time},
  author    = {Zhou, Xichuan and Peng, Yicong and Long, Chunqiao and Ren, Fengbo and Shi, Cong},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {11503--11512},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/zhou20b/zhou20b.pdf},
  url       = {https://proceedings.mlr.press/v119/zhou20b.html}
}
Endnote
%0 Conference Paper
%T MoNet3D: Towards Accurate Monocular 3D Object Localization in Real Time
%A Xichuan Zhou
%A Yicong Peng
%A Chunqiao Long
%A Fengbo Ren
%A Cong Shi
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-zhou20b
%I PMLR
%P 11503--11512
%U https://proceedings.mlr.press/v119/zhou20b.html
%V 119
APA
Zhou, X., Peng, Y., Long, C., Ren, F., & Shi, C. (2020). MoNet3D: Towards Accurate Monocular 3D Object Localization in Real Time. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:11503-11512. Available from https://proceedings.mlr.press/v119/zhou20b.html.