Social-VRNN: One-Shot Multi-modal Trajectory Prediction for Interacting Pedestrians

Bruno Ferreira de Brito, Hai Zhu, Wei Pan, Javier Alonso-Mora
Proceedings of the 2020 Conference on Robot Learning, PMLR 155:862-872, 2021.

Abstract

Prediction of human motion is key to the safe navigation of autonomous robots among humans. In cluttered environments, several motion hypotheses may exist for a pedestrian due to its interactions with the environment and with other pedestrians. Previous works for estimating multiple motion hypotheses require a large number of samples, which limits their applicability in real-time motion planning. In this paper, we present a variational learning approach for interaction-aware and multi-modal trajectory prediction based on deep generative neural networks. Our approach achieves faster convergence and requires significantly fewer samples than state-of-the-art methods. Experimental results on real and simulated data show that our model can effectively learn to infer different trajectories. We compare our method with three baseline approaches and present performance results demonstrating that our generative model achieves higher trajectory-prediction accuracy by producing diverse trajectories.
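
To make the "one-shot" idea concrete: sampling-based predictors draw many latent samples (and thus many forward passes) to cover the modes of a pedestrian's future motion, whereas a one-shot model emits all K hypotheses from a single pass. The sketch below is an illustrative toy, not the authors' Social-VRNN architecture: it replaces the variational RNN and interaction modules with a plain GRU encoder and a Gaussian-mixture output head, and all names, layer sizes, and dimensions (OneShotMultiModalPredictor, hidden=64, num_modes=3, pred_len=12) are assumptions made for the example.

import torch
import torch.nn as nn

class OneShotMultiModalPredictor(nn.Module):
    """Toy sketch of one-shot multi-modal prediction: every one of the
    K trajectory hypotheses comes from a single forward pass, rather
    than from repeated latent sampling."""

    def __init__(self, obs_dim=2, hidden=64, num_modes=3, pred_len=12):
        super().__init__()
        self.num_modes, self.pred_len = num_modes, pred_len
        self.encoder = nn.GRU(obs_dim, hidden, batch_first=True)
        # Mixture head: per-mode means, log-stds, and mode weights.
        self.mu_head = nn.Linear(hidden, num_modes * pred_len * 2)
        self.logstd_head = nn.Linear(hidden, num_modes * pred_len * 2)
        self.weight_head = nn.Linear(hidden, num_modes)

    def forward(self, obs_traj):
        # obs_traj: (batch, obs_len, 2) observed x/y positions.
        _, h = self.encoder(obs_traj)            # h: (1, batch, hidden)
        h = h.squeeze(0)
        b = h.size(0)
        mu = self.mu_head(h).view(b, self.num_modes, self.pred_len, 2)
        logstd = self.logstd_head(h).view(b, self.num_modes, self.pred_len, 2)
        weights = self.weight_head(h).softmax(dim=-1)  # (batch, K)
        return mu, logstd, weights

model = OneShotMultiModalPredictor()
past = torch.randn(4, 8, 2)    # 4 pedestrians, 8 observed time steps
mu, logstd, w = model(past)    # K = 3 hypotheses in one forward pass
print(mu.shape, w.shape)       # (4, 3, 12, 2) and (4, 3)

In this toy setup, one call to the model yields a full mixture over futures, which is why such predictors need far fewer forward passes than approaches that must sample repeatedly to expose multiple modes.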

Cite this Paper

BibTeX
@InProceedings{pmlr-v155-brito21a,
  title     = {Social-VRNN: One-Shot Multi-modal Trajectory Prediction for Interacting Pedestrians},
  author    = {Brito, Bruno Ferreira de and Zhu, Hai and Pan, Wei and Alonso-Mora, Javier},
  booktitle = {Proceedings of the 2020 Conference on Robot Learning},
  pages     = {862--872},
  year      = {2021},
  editor    = {Kober, Jens and Ramos, Fabio and Tomlin, Claire},
  volume    = {155},
  series    = {Proceedings of Machine Learning Research},
  month     = {16--18 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v155/brito21a/brito21a.pdf},
  url       = {https://proceedings.mlr.press/v155/brito21a.html},
  abstract  = {Prediction of human motion is key to the safe navigation of autonomous robots among humans. In cluttered environments, several motion hypotheses may exist for a pedestrian due to its interactions with the environment and with other pedestrians. Previous works for estimating multiple motion hypotheses require a large number of samples, which limits their applicability in real-time motion planning. In this paper, we present a variational learning approach for interaction-aware and multi-modal trajectory prediction based on deep generative neural networks. Our approach achieves faster convergence and requires significantly fewer samples than state-of-the-art methods. Experimental results on real and simulated data show that our model can effectively learn to infer different trajectories. We compare our method with three baseline approaches and present performance results demonstrating that our generative model achieves higher trajectory-prediction accuracy by producing diverse trajectories.}
}
Endnote
%0 Conference Paper
%T Social-VRNN: One-Shot Multi-modal Trajectory Prediction for Interacting Pedestrians
%A Bruno Ferreira de Brito
%A Hai Zhu
%A Wei Pan
%A Javier Alonso-Mora
%B Proceedings of the 2020 Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Jens Kober
%E Fabio Ramos
%E Claire Tomlin
%F pmlr-v155-brito21a
%I PMLR
%P 862--872
%U https://proceedings.mlr.press/v155/brito21a.html
%V 155
%X Prediction of human motion is key to the safe navigation of autonomous robots among humans. In cluttered environments, several motion hypotheses may exist for a pedestrian due to its interactions with the environment and with other pedestrians. Previous works for estimating multiple motion hypotheses require a large number of samples, which limits their applicability in real-time motion planning. In this paper, we present a variational learning approach for interaction-aware and multi-modal trajectory prediction based on deep generative neural networks. Our approach achieves faster convergence and requires significantly fewer samples than state-of-the-art methods. Experimental results on real and simulated data show that our model can effectively learn to infer different trajectories. We compare our method with three baseline approaches and present performance results demonstrating that our generative model achieves higher trajectory-prediction accuracy by producing diverse trajectories.
APA
Brito, B.F.d., Zhu, H., Pan, W. & Alonso-Mora, J. (2021). Social-VRNN: One-Shot Multi-modal Trajectory Prediction for Interacting Pedestrians. Proceedings of the 2020 Conference on Robot Learning, in Proceedings of Machine Learning Research 155:862-872. Available from https://proceedings.mlr.press/v155/brito21a.html.