IntentNet: Learning to Predict Intention from Raw Sensor Data

Sergio Casas, Wenjie Luo, Raquel Urtasun
Proceedings of The 2nd Conference on Robot Learning, PMLR 87:947-956, 2018.

Abstract

In order to plan a safe maneuver, self-driving vehicles need to understand the intent of other traffic participants. We define intent as a combination of discrete high level behaviors as well as continuous trajectories describing future motion. In this paper we develop a one-stage detector and forecaster that exploits both 3D point clouds produced by a LiDAR sensor as well as dynamic maps of the environment. Our multi-task model achieves better accuracy than the respective separate modules while saving computation, which is critical to reduce reaction time in self-driving applications.
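The abstract describes a single backbone that is shared across detection, discrete intention classification, and continuous trajectory regression, taking a LiDAR point cloud and a dynamic map as input. The sketch below illustrates that multi-task structure only; it is not the authors' implementation, and every name, channel count, grid size, and layer choice here (IntentNetSketch, lidar_channels, map_channels, num_intentions, horizon, the convolutional trunk) is an assumption made for illustration.

# Hypothetical sketch (not the paper's code): one shared backbone consumes a
# bird's-eye-view rasterization of the LiDAR sweep concatenated with map
# channels, and three heads produce detection scores, intention logits, and
# future (x, y) waypoints per spatial location.
import torch
import torch.nn as nn


class IntentNetSketch(nn.Module):
    def __init__(self, lidar_channels=32, map_channels=17,
                 num_intentions=8, horizon=10):
        super().__init__()
        in_channels = lidar_channels + map_channels  # fused BEV input
        self.backbone = nn.Sequential(               # toy convolutional trunk
            nn.Conv2d(in_channels, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(128, 256, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Per-location output heads (illustrative, anchor-free).
        self.detection = nn.Conv2d(256, 1, 1)               # objectness score
        self.intention = nn.Conv2d(256, num_intentions, 1)  # behavior logits
        self.trajectory = nn.Conv2d(256, 2 * horizon, 1)    # (x, y) waypoints

    def forward(self, bev_lidar, bev_map):
        feats = self.backbone(torch.cat([bev_lidar, bev_map], dim=1))
        return {
            "detection": self.detection(feats),
            "intention": self.intention(feats),
            "trajectory": self.trajectory(feats),
        }


# Example forward pass on random tensors (shapes are made up for the sketch).
model = IntentNetSketch()
out = model(torch.randn(1, 32, 256, 256), torch.randn(1, 17, 256, 256))
print({k: v.shape for k, v in out.items()})

Because the three heads share one backbone pass, adding intention and trajectory outputs costs little beyond detection alone, which is the computational saving the abstract refers to.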

Cite this Paper


BibTeX
@InProceedings{pmlr-v87-casas18a,
  title     = {IntentNet: Learning to Predict Intention from Raw Sensor Data},
  author    = {Casas, Sergio and Luo, Wenjie and Urtasun, Raquel},
  booktitle = {Proceedings of The 2nd Conference on Robot Learning},
  pages     = {947--956},
  year      = {2018},
  editor    = {Billard, Aude and Dragan, Anca and Peters, Jan and Morimoto, Jun},
  volume    = {87},
  series    = {Proceedings of Machine Learning Research},
  month     = {29--31 Oct},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v87/casas18a/casas18a.pdf},
  url       = {https://proceedings.mlr.press/v87/casas18a.html},
  abstract  = {In order to plan a safe maneuver, self-driving vehicles need to understand the intent of other traffic participants. We define intent as a combination of discrete high level behaviors as well as continuous trajectories describing future motion. In this paper we develop a one-stage detector and forecaster that exploits both 3D point clouds produced by a LiDAR sensor as well as dynamic maps of the environment. Our multi-task model achieves better accuracy than the respective separate modules while saving computation, which is critical to reduce reaction time in self-driving applications.}
}
Endnote
%0 Conference Paper
%T IntentNet: Learning to Predict Intention from Raw Sensor Data
%A Sergio Casas
%A Wenjie Luo
%A Raquel Urtasun
%B Proceedings of The 2nd Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Aude Billard
%E Anca Dragan
%E Jan Peters
%E Jun Morimoto
%F pmlr-v87-casas18a
%I PMLR
%P 947--956
%U https://proceedings.mlr.press/v87/casas18a.html
%V 87
%X In order to plan a safe maneuver, self-driving vehicles need to understand the intent of other traffic participants. We define intent as a combination of discrete high level behaviors as well as continuous trajectories describing future motion. In this paper we develop a one-stage detector and forecaster that exploits both 3D point clouds produced by a LiDAR sensor as well as dynamic maps of the environment. Our multi-task model achieves better accuracy than the respective separate modules while saving computation, which is critical to reduce reaction time in self-driving applications.
APA
Casas, S., Luo, W. & Urtasun, R. (2018). IntentNet: Learning to Predict Intention from Raw Sensor Data. Proceedings of The 2nd Conference on Robot Learning, in Proceedings of Machine Learning Research 87:947-956. Available from https://proceedings.mlr.press/v87/casas18a.html.