Towards Generalizable Safety in Crowd Navigation via Conformal Uncertainty Handling

Jianpeng Yao, Xiaopan Zhang, Yu Xia, Zejin Wang, Amit Roy-Chowdhury, Jiachen Li
Proceedings of The 9th Conference on Robot Learning, PMLR 305:4206-4225, 2025.

Abstract

Mobile robots trained with reinforcement learning to navigate in crowds are known to suffer performance degradation when faced with out-of-distribution scenarios. We propose that by properly accounting for the uncertainties of pedestrians, a robot can learn safe navigation policies that are robust to distribution shifts. Our method augments agent observations with prediction uncertainty estimates generated by adaptive conformal inference, and it uses these estimates to guide the agent’s behavior through constrained reinforcement learning. The system helps regulate the agent’s actions and enables it to adapt to distribution shifts. In the in-distribution setting, our approach achieves a 96.93% success rate, over 8.80% higher than the previous state-of-the-art baselines, with over 3.72 times fewer collisions and 2.43 times fewer intrusions into ground-truth human future trajectories. In three out-of-distribution scenarios, our method shows much stronger robustness under distribution shifts in velocity variations, policy changes, and transitions from individual to group dynamics. We deploy our method on a real robot, and experiments show that it makes safe and robust decisions when interacting with both sparse and dense crowds.
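The uncertainty signal the abstract refers to can be illustrated with a generic adaptive conformal inference (ACI) update (Gibbs and Candès, 2021): an online miscoverage level alpha_t is nudged by a step size gamma toward a target rate, so the prediction radius around each pedestrian's forecast expands after coverage failures and contracts otherwise. The Python sketch below is a minimal illustration under these assumptions; the class name, the residual-quantile construction, and all parameter values are hypothetical and not taken from the paper's implementation.

    import numpy as np

    class AdaptiveConformalRadius:
        """Minimal sketch of generic adaptive conformal inference.

        Tracks an online miscoverage level alpha_t and returns a
        prediction radius around a pedestrian's forecasted position.
        Illustrative only; not the paper's code.
        """

        def __init__(self, target_alpha=0.1, gamma=0.01, window=50):
            self.alpha = target_alpha    # current miscoverage level alpha_t
            self.target = target_alpha   # desired long-run miscoverage rate
            self.gamma = gamma           # step size of the online update
            self.window = window         # how many recent scores to keep
            self.scores = []             # past nonconformity scores

        def radius(self):
            # Empirical (1 - alpha_t)-quantile of recent prediction errors.
            if not self.scores:
                return np.inf
            q = float(np.clip(1.0 - self.alpha, 0.0, 1.0))
            return float(np.quantile(self.scores[-self.window:], q))

        def update(self, predicted_pos, observed_pos):
            # Nonconformity score: Euclidean error of the trajectory forecast.
            score = float(np.linalg.norm(
                np.asarray(observed_pos) - np.asarray(predicted_pos)))
            # Miscoverage indicator, evaluated against the pre-update radius.
            err = 1.0 if score > self.radius() else 0.0
            # ACI update: alpha shrinks after a miss (wider future radii)
            # and grows after a cover (tighter future radii).
            self.alpha = self.alpha + self.gamma * (self.target - err)
            self.scores.append(score)

In the paper's pipeline such radii would augment the agent's observations and inform a constrained-RL cost; this sketch only shows how a conformal radius can adapt online to trajectory-prediction errors.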

Cite this Paper


BibTeX
@InProceedings{pmlr-v305-yao25a,
  title     = {Towards Generalizable Safety in Crowd Navigation via Conformal Uncertainty Handling},
  author    = {Yao, Jianpeng and Zhang, Xiaopan and Xia, Yu and Wang, Zejin and Roy-Chowdhury, Amit and Li, Jiachen},
  booktitle = {Proceedings of The 9th Conference on Robot Learning},
  pages     = {4206--4225},
  year      = {2025},
  editor    = {Lim, Joseph and Song, Shuran and Park, Hae-Won},
  volume    = {305},
  series    = {Proceedings of Machine Learning Research},
  month     = {27--30 Sep},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v305/main/assets/yao25a/yao25a.pdf},
  url       = {https://proceedings.mlr.press/v305/yao25a.html}
}
Endnote
%0 Conference Paper
%T Towards Generalizable Safety in Crowd Navigation via Conformal Uncertainty Handling
%A Jianpeng Yao
%A Xiaopan Zhang
%A Yu Xia
%A Zejin Wang
%A Amit Roy-Chowdhury
%A Jiachen Li
%B Proceedings of The 9th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Joseph Lim
%E Shuran Song
%E Hae-Won Park
%F pmlr-v305-yao25a
%I PMLR
%P 4206--4225
%U https://proceedings.mlr.press/v305/yao25a.html
%V 305
APA
Yao, J., Zhang, X., Xia, Y., Wang, Z., Roy-Chowdhury, A. & Li, J. (2025). Towards Generalizable Safety in Crowd Navigation via Conformal Uncertainty Handling. Proceedings of The 9th Conference on Robot Learning, in Proceedings of Machine Learning Research 305:4206-4225. Available from https://proceedings.mlr.press/v305/yao25a.html.