Batch Differentiable Pose Refinement for In-The-Wild Camera/LiDAR Extrinsic Calibration

Lanke Frank Tarimo Fu, Maurice Fallon
Proceedings of The 7th Conference on Robot Learning, PMLR 229:1362-1377, 2023.

Abstract

Accurate camera-to-LiDAR (Light Detection and Ranging) extrinsic calibration is important for robotic tasks that carry out tight sensor fusion, such as target tracking and odometry. Calibration is typically performed before deployment in controlled conditions using calibration targets; however, this limits scalability and complicates subsequent recalibration. We propose a novel approach for target-free camera-LiDAR calibration using end-to-end direct alignment. Our batched formulation enhances sample efficiency during training and robustness at inference time. We present experimental results on publicly available real-world data, demonstrating 1.6 cm / $0.07^{\circ}$ median accuracy when transferred to unseen sensors from held-out data sequences. We also show state-of-the-art zero-shot transfer to unseen cameras, LiDARs, and environments.
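The core technique named in the abstract, batched differentiable pose refinement via direct alignment, can be illustrated with a minimal sketch. The PyTorch code below is an illustrative toy, not the authors' implementation: it projects batches of LiDAR points into per-frame feature maps through one shared SE(3) extrinsic perturbation and refines that perturbation by gradient descent. The random feature maps, the helper names (se3_exp, project), and the negative-sampled-response loss are all assumptions standing in for whatever learned alignment cost the paper actually uses.

import torch

def so3_hat(w):
    # (B,3) axis-angle vectors -> (B,3,3) skew-symmetric matrices
    O = torch.zeros_like(w[..., 0])
    return torch.stack([
        torch.stack([O, -w[..., 2], w[..., 1]], -1),
        torch.stack([w[..., 2], O, -w[..., 0]], -1),
        torch.stack([-w[..., 1], w[..., 0], O], -1)], -2)

def se3_exp(xi):
    # Exponential map: 6-dof twist -> 4x4 SE(3) matrix (Rodrigues formula).
    w, v = xi[:3], xi[3:]
    th = (w.pow(2).sum() + 1e-12).sqrt()   # eps keeps the gradient finite at 0
    K = so3_hat(w[None])[0] / th           # normalized skew axis
    I = torch.eye(3, dtype=xi.dtype)
    R = I + th.sin() * K + (1 - th.cos()) * (K @ K)
    V = I + (1 - th.cos()) / th * K + (th - th.sin()) / th * (K @ K)
    T = torch.eye(4, dtype=xi.dtype)
    T[:3, :3], T[:3, 3] = R, V @ v
    return T

def project(pts_cam, K_intr):
    # Pinhole projection of camera-frame points: (B,N,3) -> (B,N,2) pixels.
    z = pts_cam[..., 2:3].clamp_min(1e-6)
    uv = pts_cam[..., :2] / z
    return uv * K_intr[[0, 1], [0, 1]] + K_intr[[0, 1], [2, 2]]

# Tiny synthetic batch: B frames, N LiDAR points each, HxW feature maps.
B, N, H, W = 8, 512, 64, 96
K_intr = torch.tensor([[80., 0., W / 2], [0., 80., H / 2], [0., 0., 1.]])
pts = torch.rand(B, N, 3) * torch.tensor([4., 3., 1.]) + torch.tensor([-2., -1.5, 2.])
feat = torch.rand(B, 1, H, W)          # placeholder per-frame feature maps

xi = torch.zeros(6, requires_grad=True)  # shared extrinsic perturbation (se(3))
opt = torch.optim.Adam([xi], lr=1e-2)
for step in range(100):
    T = se3_exp(xi)                                   # current refinement
    pts_cam = pts @ T[:3, :3].T + T[:3, 3]            # apply extrinsic to LiDAR points
    uv = project(pts_cam, K_intr)
    # Normalize pixel coordinates to [-1,1] and sample features differentiably.
    grid = torch.stack([uv[..., 0] / (W - 1), uv[..., 1] / (H - 1)], -1) * 2 - 1
    samp = torch.nn.functional.grid_sample(feat, grid[:, None], align_corners=True)
    loss = -samp.mean()     # toy objective: maximize sampled "alignment" response
    opt.zero_grad()
    loss.backward()
    opt.step()

Because a single twist xi is optimized against the whole batch of frames at once, gradients are pooled across views; this is the intuition behind the batched formulation's claimed sample efficiency in training and robustness at inference time.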

Cite this Paper


BibTeX
@InProceedings{pmlr-v229-fu23a,
  title = {Batch Differentiable Pose Refinement for In-The-Wild Camera/LiDAR Extrinsic Calibration},
  author = {Fu, Lanke Frank Tarimo and Fallon, Maurice},
  booktitle = {Proceedings of The 7th Conference on Robot Learning},
  pages = {1362--1377},
  year = {2023},
  editor = {Tan, Jie and Toussaint, Marc and Darvish, Kourosh},
  volume = {229},
  series = {Proceedings of Machine Learning Research},
  month = {06--09 Nov},
  publisher = {PMLR},
  pdf = {https://proceedings.mlr.press/v229/fu23a/fu23a.pdf},
  url = {https://proceedings.mlr.press/v229/fu23a.html},
  abstract = {Accurate camera-to-LiDAR (Light Detection and Ranging) extrinsic calibration is important for robotic tasks that carry out tight sensor fusion, such as target tracking and odometry. Calibration is typically performed before deployment in controlled conditions using calibration targets; however, this limits scalability and complicates subsequent recalibration. We propose a novel approach for target-free camera-LiDAR calibration using end-to-end direct alignment. Our batched formulation enhances sample efficiency during training and robustness at inference time. We present experimental results on publicly available real-world data, demonstrating 1.6 cm / $0.07^{\circ}$ median accuracy when transferred to unseen sensors from held-out data sequences. We also show state-of-the-art zero-shot transfer to unseen cameras, LiDARs, and environments.}
}
Endnote
%0 Conference Paper
%T Batch Differentiable Pose Refinement for In-The-Wild Camera/LiDAR Extrinsic Calibration
%A Lanke Frank Tarimo Fu
%A Maurice Fallon
%B Proceedings of The 7th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Jie Tan
%E Marc Toussaint
%E Kourosh Darvish
%F pmlr-v229-fu23a
%I PMLR
%P 1362--1377
%U https://proceedings.mlr.press/v229/fu23a.html
%V 229
%X Accurate camera-to-LiDAR (Light Detection and Ranging) extrinsic calibration is important for robotic tasks that carry out tight sensor fusion, such as target tracking and odometry. Calibration is typically performed before deployment in controlled conditions using calibration targets; however, this limits scalability and complicates subsequent recalibration. We propose a novel approach for target-free camera-LiDAR calibration using end-to-end direct alignment. Our batched formulation enhances sample efficiency during training and robustness at inference time. We present experimental results on publicly available real-world data, demonstrating 1.6 cm / $0.07^{\circ}$ median accuracy when transferred to unseen sensors from held-out data sequences. We also show state-of-the-art zero-shot transfer to unseen cameras, LiDARs, and environments.
APA
Fu, L.F.T. & Fallon, M. (2023). Batch Differentiable Pose Refinement for In-The-Wild Camera/LiDAR Extrinsic Calibration. Proceedings of The 7th Conference on Robot Learning, in Proceedings of Machine Learning Research 229:1362-1377. Available from https://proceedings.mlr.press/v229/fu23a.html.
