LiRaNet: End-to-End Trajectory Prediction using Spatio-Temporal Radar Fusion

Meet Shah, Zhiling Huang, Ankit Laddha, Matthew Langford, Blake Barber, Sida Zhang, Carlos Vallespi-Gonzalez, Raquel Urtasun
Proceedings of the 2020 Conference on Robot Learning, PMLR 155:31-48, 2021.

Abstract

In this paper, we present LiRaNet, a novel end-to-end trajectory prediction method which utilizes radar sensor information along with widely used lidar and HD maps. Automotive radar provides rich, complementary information, allowing for longer range vehicle detection as well as instantaneous radial velocity measurements. However, there are factors that make the fusion of lidar and radar information challenging, such as the relatively low angular resolution of radar measurements, their sparsity and the lack of exact time synchronization with lidar. To overcome these challenges, we propose an efficient spatio-temporal radar feature extraction scheme which achieves state-of-the-art performance on multiple large-scale datasets. Further, by incorporating radar information, we show a 52% reduction in prediction error for objects with high acceleration and a 16% reduction in prediction error for objects at longer range.
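To make the fusion idea concrete, below is a minimal, hypothetical sketch (not the paper's actual feature extraction scheme) of how sparse radar returns carrying radial velocity measurements might be rasterized into a bird's-eye-view (BEV) grid that a detection/prediction backbone could consume alongside lidar BEV features. The grid size, resolution, channel layout, and the assumption that radial velocities are already ego-motion compensated are all illustrative choices, not details from the paper.

import numpy as np

def radar_to_bev(points_xy, radial_vel, grid_size=144, resolution=0.5):
    """Illustrative only: scatter sparse radar points into a BEV feature grid.

    points_xy: (N, 2) radar point positions in the ego frame [m].
    radial_vel: (N,) ego-motion-compensated radial speeds [m/s].
    Returns a (3, grid_size, grid_size) tensor:
      channel 0 = occupancy, channels 1-2 = mean radial velocity
      decomposed along the x and y axes using each point's bearing.
    """
    bev = np.zeros((3, grid_size, grid_size), dtype=np.float32)
    counts = np.zeros((grid_size, grid_size), dtype=np.float32)
    half = grid_size * resolution / 2.0

    for (x, y), v_r in zip(points_xy, radial_vel):
        col = int(np.floor((x + half) / resolution))
        row = int(np.floor((y + half) / resolution))
        if not (0 <= row < grid_size and 0 <= col < grid_size):
            continue
        r = np.hypot(x, y) + 1e-6
        # Project the radial speed onto x/y; radar only observes the
        # component of velocity along the line of sight.
        bev[0, row, col] = 1.0
        bev[1, row, col] += v_r * x / r
        bev[2, row, col] += v_r * y / r
        counts[row, col] += 1.0

    nonzero = counts > 0
    bev[1][nonzero] /= counts[nonzero]
    bev[2][nonzero] /= counts[nonzero]
    return bev

A grid of this form can be stacked per radar sweep to capture the temporal dimension, and its low spatial resolution reflects the sparsity and coarse angular resolution of radar discussed in the abstract; how LiRaNet actually aggregates sweeps and handles the lidar/radar time offset is described in the paper itself.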

Cite this Paper


BibTeX
@InProceedings{pmlr-v155-shah21a,
  title     = {LiRaNet: End-to-End Trajectory Prediction using Spatio-Temporal Radar Fusion},
  author    = {Shah, Meet and Huang, Zhiling and Laddha, Ankit and Langford, Matthew and Barber, Blake and Zhang, Sida and Vallespi-Gonzalez, Carlos and Urtasun, Raquel},
  booktitle = {Proceedings of the 2020 Conference on Robot Learning},
  pages     = {31--48},
  year      = {2021},
  editor    = {Kober, Jens and Ramos, Fabio and Tomlin, Claire},
  volume    = {155},
  series    = {Proceedings of Machine Learning Research},
  month     = {16--18 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v155/shah21a/shah21a.pdf},
  url       = {https://proceedings.mlr.press/v155/shah21a.html}
}
Endnote
%0 Conference Paper
%T LiRaNet: End-to-End Trajectory Prediction using Spatio-Temporal Radar Fusion
%A Meet Shah
%A Zhiling Huang
%A Ankit Laddha
%A Matthew Langford
%A Blake Barber
%A Sida Zhang
%A Carlos Vallespi-Gonzalez
%A Raquel Urtasun
%B Proceedings of the 2020 Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Jens Kober
%E Fabio Ramos
%E Claire Tomlin
%F pmlr-v155-shah21a
%I PMLR
%P 31--48
%U https://proceedings.mlr.press/v155/shah21a.html
%V 155
APA
Shah, M., Huang, Z., Laddha, A., Langford, M., Barber, B., Zhang, S., Vallespi-Gonzalez, C. & Urtasun, R. (2021). LiRaNet: End-to-End Trajectory Prediction using Spatio-Temporal Radar Fusion. Proceedings of the 2020 Conference on Robot Learning, in Proceedings of Machine Learning Research 155:31-48. Available from https://proceedings.mlr.press/v155/shah21a.html.