Deep Interactive Motion Prediction and Planning: Playing Games with Motion Prediction Models

Jose Luis Vazquez Espinoza, Alexander Liniger, Wilko Schwarting, Daniela Rus, Luc Van Gool
Proceedings of The 4th Annual Learning for Dynamics and Control Conference, PMLR 168:1006-1019, 2022.

Abstract

In most classical Autonomous Vehicle (AV) stacks, the prediction and planning layers are separated, limiting the planner to react to predictions that are not informed by the planned trajectory of the AV. This work presents a module that tightly couples these layers via a game-theoretic Model Predictive Controller (MPC) that uses a novel interactive multi-agent neural network policy as part of its predictive model. In our setting, the MPC planner considers all the surrounding agents by informing the multi-agent policy with the planned state sequence. Fundamental to the success of our method is the design of a novel multi-agent policy network that can steer a vehicle given the state of the surrounding agents and the map information. The policy network is trained implicitly with ground-truth observation data using backpropagation through time and a differentiable dynamics model to roll out the trajectory forward in time. Finally, we show that our multi-agent policy network learns to drive while interacting with the environment, and, when combined with the game-theoretic MPC planner, can successfully generate interactive behaviors.
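The training scheme the abstract describes (a policy network unrolled through a differentiable dynamics model and fit to ground-truth observed trajectories with backpropagation through time) can be summarized in a few lines. The sketch below is illustrative only and not the authors' implementation: the PolicyNet architecture, the kinematic bicycle dynamics, the state layout [x, y, heading, v], and all dimensions are assumptions introduced for this example.

# Minimal sketch (not the paper's code) of training a driving policy by
# rolling it out through a differentiable dynamics model and
# backpropagating the trajectory error through time.
import torch
import torch.nn as nn

class PolicyNet(nn.Module):
    """Maps an agent's state plus a context embedding (surrounding agents
    and map features, assumed precomputed) to controls [accel, steer]."""
    def __init__(self, state_dim=4, ctx_dim=32, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(state_dim + ctx_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),
        )

    def forward(self, state, ctx):
        return self.mlp(torch.cat([state, ctx], dim=-1))

def bicycle_step(state, control, dt=0.1, wheelbase=2.7):
    """One step of a differentiable kinematic bicycle model,
    state = [x, y, heading, v] (an assumed stand-in dynamics model)."""
    x, y, th, v = state.unbind(-1)
    a, delta = control.unbind(-1)
    x = x + v * torch.cos(th) * dt
    y = y + v * torch.sin(th) * dt
    th = th + v / wheelbase * torch.tan(delta) * dt
    v = v + a * dt
    return torch.stack([x, y, th, v], dim=-1)

def rollout_loss(policy, init_state, ctx_seq, gt_states):
    """Roll the policy forward through the dynamics and penalize deviation
    from the ground-truth positions; because every dynamics step is a
    differentiable torch op, gradients flow back through the whole
    rollout (backpropagation through time)."""
    state, loss = init_state, 0.0
    for t in range(gt_states.shape[0]):
        control = policy(state, ctx_seq[t])
        state = bicycle_step(state, control)
        loss = loss + torch.mean((state[..., :2] - gt_states[t, ..., :2]) ** 2)
    return loss / gt_states.shape[0]

# Hypothetical usage with random tensors in place of real driving data:
policy = PolicyNet()
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
init = torch.zeros(8, 4)        # batch of 8 agents
ctx = torch.randn(20, 8, 32)    # per-step context embeddings (20 steps)
gt = torch.randn(20, 8, 4)      # ground-truth state sequences
loss = rollout_loss(policy, init, ctx, gt)
opt.zero_grad(); loss.backward(); opt.step()

In the paper's setting, such a learned multi-agent policy serves as the predictive model inside a game-theoretic MPC, which additionally feeds the planned ego state sequence to the policy so the predictions react to the plan.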

Cite this Paper


BibTeX
@InProceedings{pmlr-v168-espinoza22a,
  title     = {Deep Interactive Motion Prediction and Planning: Playing Games with Motion Prediction Models},
  author    = {Espinoza, Jose Luis Vazquez and Liniger, Alexander and Schwarting, Wilko and Rus, Daniela and Gool, Luc Van},
  booktitle = {Proceedings of The 4th Annual Learning for Dynamics and Control Conference},
  pages     = {1006--1019},
  year      = {2022},
  editor    = {Firoozi, Roya and Mehr, Negar and Yel, Esen and Antonova, Rika and Bohg, Jeannette and Schwager, Mac and Kochenderfer, Mykel},
  volume    = {168},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--24 Jun},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v168/espinoza22a/espinoza22a.pdf},
  url       = {https://proceedings.mlr.press/v168/espinoza22a.html}
}
Endnote
%0 Conference Paper
%T Deep Interactive Motion Prediction and Planning: Playing Games with Motion Prediction Models
%A Jose Luis Vazquez Espinoza
%A Alexander Liniger
%A Wilko Schwarting
%A Daniela Rus
%A Luc Van Gool
%B Proceedings of The 4th Annual Learning for Dynamics and Control Conference
%C Proceedings of Machine Learning Research
%D 2022
%E Roya Firoozi
%E Negar Mehr
%E Esen Yel
%E Rika Antonova
%E Jeannette Bohg
%E Mac Schwager
%E Mykel Kochenderfer
%F pmlr-v168-espinoza22a
%I PMLR
%P 1006--1019
%U https://proceedings.mlr.press/v168/espinoza22a.html
%V 168
APA
Espinoza, J.L.V., Liniger, A., Schwarting, W., Rus, D. & Van Gool, L. (2022). Deep Interactive Motion Prediction and Planning: Playing Games with Motion Prediction Models. Proceedings of The 4th Annual Learning for Dynamics and Control Conference, in Proceedings of Machine Learning Research 168:1006-1019. Available from https://proceedings.mlr.press/v168/espinoza22a.html.