SE(3)-Equivariant Point Cloud-Based Place Recognition

Chien Erh Lin, Jingwei Song, Ray Zhang, Minghan Zhu, Maani Ghaffari
Proceedings of The 6th Conference on Robot Learning, PMLR 205:1520-1530, 2023.

Abstract

This paper reports on a new 3D point cloud-based place recognition framework that uses SE(3)-equivariant networks to learn SE(3)-invariant global descriptors. We find that, unlike those of existing methods, the learned SE(3)-invariant global descriptors remain robust against matching inaccuracies and failures under severe rotation and translation. Mobile robots undergo arbitrary rotational and translational movements. The SE(3)-invariant property ensures that the learned descriptors are robust to rotation and translation changes in the robot pose and capture the intrinsic geometric information of the scene. Furthermore, we find that the attention module improves performance while allowing significant downsampling. We evaluate the proposed framework on real-world datasets. The experimental results show that it outperforms state-of-the-art baselines on various metrics, yielding a reliable point cloud-based place recognition network. We have open-sourced our code at: https://github.com/UMich-CURLY/se3_equivariant_place_recognition.
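To make the invariance property concrete, the following is a minimal, self-contained sketch (not the authors' released code) of what SE(3)-invariance of a global descriptor means: applying an arbitrary rigid-body transform to a point cloud should leave its descriptor unchanged. The `describe` and `random_se3` functions below are hypothetical stand-ins; `describe` uses simple pairwise-distance statistics purely to illustrate the property, whereas the paper learns the descriptor with an SE(3)-equivariant network.

```python
# Minimal sketch, assuming only NumPy. `describe` and `random_se3` are
# illustrative stand-ins, not the paper's SE(3)-equivariant network.
import numpy as np

def random_se3(rng):
    """Sample a random rigid-body transform (R, t) with R a proper rotation."""
    Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    if np.linalg.det(Q) < 0:
        Q[:, 0] *= -1.0          # ensure det(R) = +1
    t = rng.uniform(-10.0, 10.0, size=3)
    return Q, t

def describe(points):
    """Toy SE(3)-invariant global descriptor: statistics of pairwise
    distances, which a rigid-body transform cannot change."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d = d[np.triu_indices(len(points), k=1)]
    return np.array([d.mean(), d.std(), np.median(d), d.max()])

rng = np.random.default_rng(0)
cloud = rng.standard_normal((1024, 3))   # stand-in for a LiDAR scan
R, t = random_se3(rng)
moved = cloud @ R.T + t                  # same scene, different robot pose

# An SE(3)-invariant descriptor is (numerically) identical for both views.
print(np.allclose(describe(cloud), describe(moved)))
```

In the paper's setting, the same check applies to the learned descriptor: descriptors of the same scene observed from different robot poses should remain close, which is what makes retrieval robust to severe rotations and translations.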

Cite this Paper


BibTeX
@InProceedings{pmlr-v205-lin23a,
  title     = {SE(3)-Equivariant Point Cloud-Based Place Recognition},
  author    = {Lin, Chien Erh and Song, Jingwei and Zhang, Ray and Zhu, Minghan and Ghaffari, Maani},
  booktitle = {Proceedings of The 6th Conference on Robot Learning},
  pages     = {1520--1530},
  year      = {2023},
  editor    = {Liu, Karen and Kulic, Dana and Ichnowski, Jeff},
  volume    = {205},
  series    = {Proceedings of Machine Learning Research},
  month     = {14--18 Dec},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v205/lin23a/lin23a.pdf},
  url       = {https://proceedings.mlr.press/v205/lin23a.html}
}
Endnote
%0 Conference Paper
%T SE(3)-Equivariant Point Cloud-Based Place Recognition
%A Chien Erh Lin
%A Jingwei Song
%A Ray Zhang
%A Minghan Zhu
%A Maani Ghaffari
%B Proceedings of The 6th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Karen Liu
%E Dana Kulic
%E Jeff Ichnowski
%F pmlr-v205-lin23a
%I PMLR
%P 1520--1530
%U https://proceedings.mlr.press/v205/lin23a.html
%V 205
APA
Lin, C.E., Song, J., Zhang, R., Zhu, M., & Ghaffari, M. (2023). SE(3)-Equivariant Point Cloud-Based Place Recognition. Proceedings of The 6th Conference on Robot Learning, in Proceedings of Machine Learning Research 205:1520-1530. Available from https://proceedings.mlr.press/v205/lin23a.html.
