Towards Generalizable and Interpretable Motion Prediction: A Deep Variational Bayes Approach

Juanwu Lu, Wei Zhan, Masayoshi Tomizuka, Yeping Hu
Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, PMLR 238:4717-4725, 2024.

Abstract

Estimating the potential behavior of surrounding human-driven vehicles is crucial for the safety of autonomous vehicles in mixed traffic flow. Recent state-of-the-art methods achieve accurate prediction using deep neural networks. However, these end-to-end models are usually black boxes with weak interpretability and generalizability. This paper proposes the Goal-based Neural Variational Agent (GNeVA), an interpretable generative model for motion prediction with robust generalizability to out-of-distribution cases. For interpretability, the model achieves target-driven motion prediction by estimating the spatial distribution of long-term destinations with a variational mixture of Gaussians. We identify a causal structure among maps and agents’ histories and derive a variational posterior to enhance generalizability. Experiments on motion prediction datasets validate that the fitted model is interpretable and generalizable and achieves performance comparable to the state of the art.
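
To make the core idea concrete, below is a minimal, hypothetical sketch of a goal head that parameterizes a mixture of Gaussians p(g | x) = sum_k pi_k(x) N(g; mu_k(x), Sigma_k(x)) over 2-D destination coordinates g, conditioned on an encoded scene context x (map plus agent histories). It is written in PyTorch for illustration only; all module names, dimensions, and the diagonal-covariance choice are assumptions, not the authors' GNeVA implementation, which additionally places a variational posterior over the mixture parameters.

# A minimal sketch (NOT the authors' implementation) of a goal head that
# predicts a mixture of Gaussians over 2-D destination coordinates,
# conditioned on an encoded scene context. Names and sizes are illustrative.
import torch
import torch.nn as nn
import torch.distributions as D

class GaussianMixtureGoalHead(nn.Module):
    def __init__(self, context_dim: int = 128, num_components: int = 6):
        super().__init__()
        self.num_components = num_components
        # One linear head per mixture parameter set: weights, means, scales.
        self.logits = nn.Linear(context_dim, num_components)
        self.means = nn.Linear(context_dim, num_components * 2)
        self.log_scales = nn.Linear(context_dim, num_components * 2)

    def forward(self, context: torch.Tensor) -> D.MixtureSameFamily:
        # context: (batch, context_dim) encoding of map + agent histories.
        batch = context.shape[0]
        mix = D.Categorical(logits=self.logits(context))
        means = self.means(context).view(batch, self.num_components, 2)
        scales = self.log_scales(context).view(batch, self.num_components, 2).exp()
        # Diagonal-covariance Gaussians over (x, y) goal coordinates.
        comp = D.Independent(D.Normal(means, scales), 1)
        return D.MixtureSameFamily(mix, comp)

head = GaussianMixtureGoalHead()
goal_dist = head(torch.randn(4, 128))                 # toy context features
nll = -goal_dist.log_prob(torch.randn(4, 2)).mean()   # train by maximum likelihood
samples = goal_dist.sample((10,))                     # candidate long-term goals

Sampling from the fitted mixture yields candidate long-term destinations whose weights and component locations can be inspected directly, which is the sense in which a goal-based mixture density is more interpretable than an end-to-end trajectory regressor.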

Cite this Paper


BibTeX
@InProceedings{pmlr-v238-lu24a,
  title     = {Towards Generalizable and Interpretable Motion Prediction: A Deep Variational {B}ayes Approach},
  author    = {Lu, Juanwu and Zhan, Wei and Tomizuka, Masayoshi and Hu, Yeping},
  booktitle = {Proceedings of The 27th International Conference on Artificial Intelligence and Statistics},
  pages     = {4717--4725},
  year      = {2024},
  editor    = {Dasgupta, Sanjoy and Mandt, Stephan and Li, Yingzhen},
  volume    = {238},
  series    = {Proceedings of Machine Learning Research},
  month     = {02--04 May},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v238/lu24a/lu24a.pdf},
  url       = {https://proceedings.mlr.press/v238/lu24a.html},
  abstract  = {Estimating the potential behavior of the surrounding human-driven vehicles is crucial for the safety of autonomous vehicles in a mixed traffic flow. Recent state-of-the-art achieved accurate prediction using deep neural networks. However, these end-to-end models are usually black boxes with weak interpretability and generalizability. This paper proposes the Goal-based Neural Variational Agent (GNeVA), an interpretable generative model for motion prediction with robust generalizability to out-of-distribution cases. For interpretability, the model achieves target-driven motion prediction by estimating the spatial distribution of long-term destinations with a variational mixture of Gaussians. We identify a causal structure among maps and agents’ histories and derive a variational posterior to enhance generalizability. Experiments on motion prediction datasets validate that the fitted model can be interpretable and generalizable and can achieve comparable performance to state-of-the-art results.}
}
Endnote
%0 Conference Paper
%T Towards Generalizable and Interpretable Motion Prediction: A Deep Variational Bayes Approach
%A Juanwu Lu
%A Wei Zhan
%A Masayoshi Tomizuka
%A Yeping Hu
%B Proceedings of The 27th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2024
%E Sanjoy Dasgupta
%E Stephan Mandt
%E Yingzhen Li
%F pmlr-v238-lu24a
%I PMLR
%P 4717--4725
%U https://proceedings.mlr.press/v238/lu24a.html
%V 238
%X Estimating the potential behavior of the surrounding human-driven vehicles is crucial for the safety of autonomous vehicles in a mixed traffic flow. Recent state-of-the-art achieved accurate prediction using deep neural networks. However, these end-to-end models are usually black boxes with weak interpretability and generalizability. This paper proposes the Goal-based Neural Variational Agent (GNeVA), an interpretable generative model for motion prediction with robust generalizability to out-of-distribution cases. For interpretability, the model achieves target-driven motion prediction by estimating the spatial distribution of long-term destinations with a variational mixture of Gaussians. We identify a causal structure among maps and agents’ histories and derive a variational posterior to enhance generalizability. Experiments on motion prediction datasets validate that the fitted model can be interpretable and generalizable and can achieve comparable performance to state-of-the-art results.
APA
Lu, J., Zhan, W., Tomizuka, M., & Hu, Y. (2024). Towards Generalizable and Interpretable Motion Prediction: A Deep Variational Bayes Approach. Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 238:4717-4725. Available from https://proceedings.mlr.press/v238/lu24a.html.

Related Material

Download PDF: https://proceedings.mlr.press/v238/lu24a/lu24a.pdf