Learning Control Admissibility Models with Graph Neural Networks for Multi-Agent Navigation

Chenning Yu, Hongzhan Yu, Sicun Gao
Proceedings of The 6th Conference on Robot Learning, PMLR 205:934-945, 2023.

Abstract

Deep reinforcement learning in continuous domains focuses on learning control policies that map states to distributions over actions that ideally concentrate on the optimal choices in each step. In multi-agent navigation problems, the optimal actions depend heavily on the agents’ density. Their interaction patterns grow exponentially with respect to such density, making it hard for learning-based methods to generalize. We propose to switch the learning objectives from predicting the optimal actions to predicting sets of admissible actions, which we call control admissibility models (CAMs), such that they can be easily composed and used for online inference for an arbitrary number of agents. We design CAMs using graph neural networks and develop training methods that optimize the CAMs in the standard model-free setting, with the additional benefit of eliminating the need for reward engineering typically required to balance collision avoidance and goal-reaching requirements. We evaluate the proposed approach in multi-agent navigation environments. We show that the CAM models can be trained in environments with only a few agents and be easily composed for deployment in dense environments with hundreds of agents, achieving better performance than state-of-the-art methods.
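The core idea of filtering candidate actions through an admissibility model, rather than predicting a single optimal action, can be illustrated with a minimal sketch. This is not the paper's method: the paper learns the admissibility model as a graph neural network over the agent's neighborhood, whereas the `cam_admissible` stand-in below uses a hand-coded geometric rule (reject actions that steer toward a nearby neighbor) purely to show the inference-time pipeline of sample, filter, then pick the best admissible action.

```python
import random

# Hypothetical stand-in for a trained control admissibility model (CAM).
# In the paper this is a learned graph neural network; here it simply
# rejects actions whose direction points toward a nearby neighbor.
def cam_admissible(agent, neighbor, action):
    to_neighbor = (neighbor[0] - agent[0], neighbor[1] - agent[1])
    return action[0] * to_neighbor[0] + action[1] * to_neighbor[1] < 0.0

def select_action(agent, goal, neighbor, n_candidates=64, seed=0):
    """Sample candidate actions, keep the admissible subset,
    then pick the admissible candidate that best advances the goal."""
    rng = random.Random(seed)
    candidates = [(rng.uniform(-1, 1), rng.uniform(-1, 1))
                  for _ in range(n_candidates)]
    admissible = [a for a in candidates if cam_admissible(agent, neighbor, a)]
    if not admissible:
        return (0.0, 0.0)  # no admissible action found: stay put
    to_goal = (goal[0] - agent[0], goal[1] - agent[1])
    return max(admissible, key=lambda a: a[0] * to_goal[0] + a[1] * to_goal[1])

# Agent at the origin, goal to the right, neighbor directly above:
# the selected action moves toward the goal while steering away
# from the neighbor.
action = select_action(agent=(0, 0), goal=(1, 0), neighbor=(0, 1))
```

Because admissibility is evaluated per neighbor rather than baked into one policy, such a filter composes naturally: with many agents, an action must pass the check against each neighbor, which is what lets a model trained with few agents be deployed in dense environments.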

Cite this Paper


BibTeX
@InProceedings{pmlr-v205-yu23a,
  title     = {Learning Control Admissibility Models with Graph Neural Networks for Multi-Agent Navigation},
  author    = {Yu, Chenning and Yu, Hongzhan and Gao, Sicun},
  booktitle = {Proceedings of The 6th Conference on Robot Learning},
  pages     = {934--945},
  year      = {2023},
  editor    = {Liu, Karen and Kulic, Dana and Ichnowski, Jeff},
  volume    = {205},
  series    = {Proceedings of Machine Learning Research},
  month     = {14--18 Dec},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v205/yu23a/yu23a.pdf},
  url       = {https://proceedings.mlr.press/v205/yu23a.html},
  abstract  = {Deep reinforcement learning in continuous domains focuses on learning control policies that map states to distributions over actions that ideally concentrate on the optimal choices in each step. In multi-agent navigation problems, the optimal actions depend heavily on the agents’ density. Their interaction patterns grow exponentially with respect to such density, making it hard for learning-based methods to generalize. We propose to switch the learning objectives from predicting the optimal actions to predicting sets of admissible actions, which we call control admissibility models (CAMs), such that they can be easily composed and used for online inference for an arbitrary number of agents. We design CAMs using graph neural networks and develop training methods that optimize the CAMs in the standard model-free setting, with the additional benefit of eliminating the need for reward engineering typically required to balance collision avoidance and goal-reaching requirements. We evaluate the proposed approach in multi-agent navigation environments. We show that the CAM models can be trained in environments with only a few agents and be easily composed for deployment in dense environments with hundreds of agents, achieving better performance than state-of-the-art methods.}
}
Endnote
%0 Conference Paper
%T Learning Control Admissibility Models with Graph Neural Networks for Multi-Agent Navigation
%A Chenning Yu
%A Hongzhan Yu
%A Sicun Gao
%B Proceedings of The 6th Conference on Robot Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Karen Liu
%E Dana Kulic
%E Jeff Ichnowski
%F pmlr-v205-yu23a
%I PMLR
%P 934--945
%U https://proceedings.mlr.press/v205/yu23a.html
%V 205
%X Deep reinforcement learning in continuous domains focuses on learning control policies that map states to distributions over actions that ideally concentrate on the optimal choices in each step. In multi-agent navigation problems, the optimal actions depend heavily on the agents’ density. Their interaction patterns grow exponentially with respect to such density, making it hard for learning-based methods to generalize. We propose to switch the learning objectives from predicting the optimal actions to predicting sets of admissible actions, which we call control admissibility models (CAMs), such that they can be easily composed and used for online inference for an arbitrary number of agents. We design CAMs using graph neural networks and develop training methods that optimize the CAMs in the standard model-free setting, with the additional benefit of eliminating the need for reward engineering typically required to balance collision avoidance and goal-reaching requirements. We evaluate the proposed approach in multi-agent navigation environments. We show that the CAM models can be trained in environments with only a few agents and be easily composed for deployment in dense environments with hundreds of agents, achieving better performance than state-of-the-art methods.
APA
Yu, C., Yu, H. & Gao, S. (2023). Learning Control Admissibility Models with Graph Neural Networks for Multi-Agent Navigation. Proceedings of The 6th Conference on Robot Learning, in Proceedings of Machine Learning Research 205:934-945. Available from https://proceedings.mlr.press/v205/yu23a.html.

Related Material