SelfVoxeLO: Self-supervised LiDAR Odometry with Voxel-based Deep Neural Networks

Yan Xu, Zhaoyang Huang, Kwan-Yee Lin, Xinge Zhu, Jianping Shi, Hujun Bao, Guofeng Zhang, Hongsheng Li
Proceedings of the 2020 Conference on Robot Learning, PMLR 155:115-125, 2021.

Abstract

Recent learning-based LiDAR odometry methods have demonstrated their competitiveness. However, most methods still face two substantial challenges: 1) the 2D projection representation of LiDAR data cannot effectively encode 3D structures from the point clouds; 2) the need for a large amount of labeled data for training limits the application scope of these methods. In this paper, we propose a self-supervised LiDAR odometry method, dubbed SelfVoxeLO, to tackle these two difficulties. Specifically, we propose a 3D convolution network to process the raw LiDAR data directly, which extracts features that better encode the 3D geometric patterns. To suit our network to self-supervised learning, we design several novel loss functions that exploit the inherent properties of LiDAR point clouds. Moreover, an uncertainty-aware mechanism is incorporated into the loss functions to alleviate the interference of moving objects and noise. We evaluate our method's performance on two large-scale datasets, i.e., KITTI and Apollo-SouthBay. Our method outperforms state-of-the-art unsupervised methods by 27%-32% in terms of translational/rotational errors on the KITTI dataset and also performs well on the Apollo-SouthBay dataset. By including more unlabeled training data, our method can further improve its performance to a level comparable to that of supervised methods.
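The uncertainty-aware mechanism mentioned in the abstract can be illustrated with a minimal sketch. This is a generic aleatoric-style uncertainty weighting, not the paper's exact formulation: per-point residuals are divided by a predicted uncertainty, with a log-uncertainty regularizer to prevent the network from inflating uncertainty everywhere. The function name and the toy values are hypothetical.

```python
import numpy as np

def uncertainty_weighted_loss(residuals, log_sigma):
    """Generic uncertainty-aware loss sketch: each per-point residual
    is down-weighted by a predicted uncertainty sigma, and the added
    log_sigma term penalizes predicting large uncertainty for every
    point (otherwise the loss could be trivially driven to zero)."""
    sigma = np.exp(log_sigma)
    return np.mean(residuals / sigma + log_sigma)

# Toy example: the last point is an outlier (e.g. a moving object).
residuals = np.array([0.1, 0.2, 5.0])
confident = np.zeros(3)                # log sigma = 0 everywhere
uncertain = np.array([0.0, 0.0, 2.0])  # high uncertainty on the outlier

# Flagging the outlier as uncertain lowers the total loss, so the
# network learns to discount moving objects/noise during training.
assert uncertainty_weighted_loss(residuals, uncertain) < \
       uncertainty_weighted_loss(residuals, confident)
```

In a full pipeline, `residuals` would come from comparing the source point cloud, warped by the predicted pose, against the target cloud, and `log_sigma` would be an additional per-point output of the network.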

Cite this Paper


BibTeX
@InProceedings{pmlr-v155-xu21a,
  title     = {SelfVoxeLO: Self-supervised LiDAR Odometry with Voxel-based Deep Neural Networks},
  author    = {Xu, Yan and Huang, Zhaoyang and Lin, Kwan-Yee and Zhu, Xinge and Shi, Jianping and Bao, Hujun and Zhang, Guofeng and Li, Hongsheng},
  booktitle = {Proceedings of the 2020 Conference on Robot Learning},
  pages     = {115--125},
  year      = {2021},
  editor    = {Kober, Jens and Ramos, Fabio and Tomlin, Claire},
  volume    = {155},
  series    = {Proceedings of Machine Learning Research},
  month     = {16--18 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v155/xu21a/xu21a.pdf},
  url       = {https://proceedings.mlr.press/v155/xu21a.html},
  abstract  = {Recent learning-based LiDAR odometry methods have demonstrated their competitiveness. However, most methods still face two substantial challenges: 1) the 2D projection representation of LiDAR data cannot effectively encode 3D structures from the point clouds; 2) the need for a large amount of labeled data for training limits the application scope of these methods. In this paper, we propose a self-supervised LiDAR odometry method, dubbed SelfVoxeLO, to tackle these two difficulties. Specifically, we propose a 3D convolution network to process the raw LiDAR data directly, which extracts features that better encode the 3D geometric patterns. To suit our network to self-supervised learning, we design several novel loss functions that exploit the inherent properties of LiDAR point clouds. Moreover, an uncertainty-aware mechanism is incorporated into the loss functions to alleviate the interference of moving objects and noise. We evaluate our method's performance on two large-scale datasets, i.e., KITTI and Apollo-SouthBay. Our method outperforms state-of-the-art unsupervised methods by 27%-32% in terms of translational/rotational errors on the KITTI dataset and also performs well on the Apollo-SouthBay dataset. By including more unlabeled training data, our method can further improve its performance to a level comparable to that of supervised methods.}
}
Endnote
%0 Conference Paper
%T SelfVoxeLO: Self-supervised LiDAR Odometry with Voxel-based Deep Neural Networks
%A Yan Xu
%A Zhaoyang Huang
%A Kwan-Yee Lin
%A Xinge Zhu
%A Jianping Shi
%A Hujun Bao
%A Guofeng Zhang
%A Hongsheng Li
%B Proceedings of the 2020 Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Jens Kober
%E Fabio Ramos
%E Claire Tomlin
%F pmlr-v155-xu21a
%I PMLR
%P 115--125
%U https://proceedings.mlr.press/v155/xu21a.html
%V 155
%X Recent learning-based LiDAR odometry methods have demonstrated their competitiveness. However, most methods still face two substantial challenges: 1) the 2D projection representation of LiDAR data cannot effectively encode 3D structures from the point clouds; 2) the need for a large amount of labeled data for training limits the application scope of these methods. In this paper, we propose a self-supervised LiDAR odometry method, dubbed SelfVoxeLO, to tackle these two difficulties. Specifically, we propose a 3D convolution network to process the raw LiDAR data directly, which extracts features that better encode the 3D geometric patterns. To suit our network to self-supervised learning, we design several novel loss functions that exploit the inherent properties of LiDAR point clouds. Moreover, an uncertainty-aware mechanism is incorporated into the loss functions to alleviate the interference of moving objects and noise. We evaluate our method's performance on two large-scale datasets, i.e., KITTI and Apollo-SouthBay. Our method outperforms state-of-the-art unsupervised methods by 27%-32% in terms of translational/rotational errors on the KITTI dataset and also performs well on the Apollo-SouthBay dataset. By including more unlabeled training data, our method can further improve its performance to a level comparable to that of supervised methods.
APA
Xu, Y., Huang, Z., Lin, K., Zhu, X., Shi, J., Bao, H., Zhang, G. & Li, H. (2021). SelfVoxeLO: Self-supervised LiDAR Odometry with Voxel-based Deep Neural Networks. Proceedings of the 2020 Conference on Robot Learning, in Proceedings of Machine Learning Research 155:115-125. Available from https://proceedings.mlr.press/v155/xu21a.html.