Is Anyone There? Learning a Planner Contingent on Perceptual Uncertainty

Charles Packer, Nicholas Rhinehart, Rowan Thomas McAllister, Matthew A. Wright, Xin Wang, Jeff He, Sergey Levine, Joseph E. Gonzalez
Proceedings of The 6th Conference on Robot Learning, PMLR 205:1607-1617, 2023.

Abstract

Robots in complex multi-agent environments should reason about the intentions of observed and currently unobserved agents. In this paper, we present a new learning-based method for prediction and planning in complex multi-agent environments where the states of the other agents are partially-observed. Our approach, Active Visual Planning (AVP), uses high-dimensional observations to learn a flow-based generative model of multi-agent joint trajectories, including unobserved agents that may be revealed in the near future, depending on the robot’s actions. Our predictive model is implemented using deep neural networks that map raw observations to future detection and pose trajectories and is learned entirely offline using a dataset of recorded observations (not ground-truth states). Once learned, our predictive model can be used for contingency planning over the potential existence, intentions, and positions of unobserved agents. We demonstrate the effectiveness of AVP on a set of autonomous driving environments inspired by real-world scenarios that require reasoning about the existence of other unobserved agents for safe and efficient driving. In these environments, AVP achieves optimal closed-loop performance, while methods that do not reason about potential unobserved agents exhibit either overconfident or underconfident behavior.

Cite this Paper


BibTeX
@InProceedings{pmlr-v205-packer23a,
  title = {Is Anyone There? Learning a Planner Contingent on Perceptual Uncertainty},
  author = {Packer, Charles and Rhinehart, Nicholas and McAllister, Rowan Thomas and Wright, Matthew A. and Wang, Xin and He, Jeff and Levine, Sergey and Gonzalez, Joseph E.},
  booktitle = {Proceedings of The 6th Conference on Robot Learning},
  pages = {1607--1617},
  year = {2023},
  editor = {Liu, Karen and Kulic, Dana and Ichnowski, Jeff},
  volume = {205},
  series = {Proceedings of Machine Learning Research},
  month = {14--18 Dec},
  publisher = {PMLR},
  pdf = {https://proceedings.mlr.press/v205/packer23a/packer23a.pdf},
  url = {https://proceedings.mlr.press/v205/packer23a.html},
  abstract = {Robots in complex multi-agent environments should reason about the intentions of observed and currently unobserved agents. In this paper, we present a new learning-based method for prediction and planning in complex multi-agent environments where the states of the other agents are partially-observed. Our approach, Active Visual Planning (AVP), uses high-dimensional observations to learn a flow-based generative model of multi-agent joint trajectories, including unobserved agents that may be revealed in the near future, depending on the robot’s actions. Our predictive model is implemented using deep neural networks that map raw observations to future detection and pose trajectories and is learned entirely offline using a dataset of recorded observations (not ground-truth states). Once learned, our predictive model can be used for contingency planning over the potential existence, intentions, and positions of unobserved agents. We demonstrate the effectiveness of AVP on a set of autonomous driving environments inspired by real-world scenarios that require reasoning about the existence of other unobserved agents for safe and efficient driving. In these environments, AVP achieves optimal closed-loop performance, while methods that do not reason about potential unobserved agents exhibit either overconfident or underconfident behavior.}
}
Endnote
%0 Conference Paper
%T Is Anyone There? Learning a Planner Contingent on Perceptual Uncertainty
%A Charles Packer
%A Nicholas Rhinehart
%A Rowan Thomas McAllister
%A Matthew A. Wright
%A Xin Wang
%A Jeff He
%A Sergey Levine
%A Joseph E. Gonzalez
%B Proceedings of The 6th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Karen Liu
%E Dana Kulic
%E Jeff Ichnowski
%F pmlr-v205-packer23a
%I PMLR
%P 1607--1617
%U https://proceedings.mlr.press/v205/packer23a.html
%V 205
%X Robots in complex multi-agent environments should reason about the intentions of observed and currently unobserved agents. In this paper, we present a new learning-based method for prediction and planning in complex multi-agent environments where the states of the other agents are partially-observed. Our approach, Active Visual Planning (AVP), uses high-dimensional observations to learn a flow-based generative model of multi-agent joint trajectories, including unobserved agents that may be revealed in the near future, depending on the robot’s actions. Our predictive model is implemented using deep neural networks that map raw observations to future detection and pose trajectories and is learned entirely offline using a dataset of recorded observations (not ground-truth states). Once learned, our predictive model can be used for contingency planning over the potential existence, intentions, and positions of unobserved agents. We demonstrate the effectiveness of AVP on a set of autonomous driving environments inspired by real-world scenarios that require reasoning about the existence of other unobserved agents for safe and efficient driving. In these environments, AVP achieves optimal closed-loop performance, while methods that do not reason about potential unobserved agents exhibit either overconfident or underconfident behavior.
APA
Packer, C., Rhinehart, N., McAllister, R.T., Wright, M.A., Wang, X., He, J., Levine, S. & Gonzalez, J.E. (2023). Is Anyone There? Learning a Planner Contingent on Perceptual Uncertainty. Proceedings of The 6th Conference on Robot Learning, in Proceedings of Machine Learning Research 205:1607-1617. Available from https://proceedings.mlr.press/v205/packer23a.html.
