Cross-environment Cooperation Enables Zero-shot Multi-agent Coordination

Kunal Jha, Wilka Carvalho, Yancheng Liang, Simon Shaolei Du, Max Kleiman-Weiner, Natasha Jaques
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:27198-27220, 2025.

Abstract

Zero-shot coordination (ZSC), the ability to adapt to a new partner in a cooperative task, is a critical component of human-compatible AI. While prior work has focused on training agents to cooperate on a single task, these specialized models do not generalize to new tasks, even if they are highly similar. Here, we study how reinforcement learning on a distribution of environments with a single partner enables learning general cooperative skills that support ZSC with many new partners on many new problems. We introduce two Jax-based, procedural generators that create billions of solvable coordination challenges. We develop a new paradigm called Cross-Environment Cooperation (CEC), and show that it outperforms competitive baselines quantitatively and qualitatively when collaborating with real people. Our findings suggest that learning to collaborate across many unique scenarios encourages agents to develop general norms, which prove effective for collaboration with different partners. Together, our results suggest a new route toward designing generalist cooperative agents capable of interacting with humans without requiring human data.

Cite this Paper
BibTeX
@InProceedings{pmlr-v267-jha25b,
  title     = {Cross-environment Cooperation Enables Zero-shot Multi-agent Coordination},
  author    = {Jha, Kunal and Carvalho, Wilka and Liang, Yancheng and Du, Simon Shaolei and Kleiman-Weiner, Max and Jaques, Natasha},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {27198--27220},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/jha25b/jha25b.pdf},
  url       = {https://proceedings.mlr.press/v267/jha25b.html},
  abstract  = {Zero-shot coordination (ZSC), the ability to adapt to a new partner in a cooperative task, is a critical component of human-compatible AI. While prior work has focused on training agents to cooperate on a single task, these specialized models do not generalize to new tasks, even if they are highly similar. Here, we study how reinforcement learning on a distribution of environments with a single partner enables learning general cooperative skills that support ZSC with many new partners on many new problems. We introduce two Jax-based, procedural generators that create billions of solvable coordination challenges. We develop a new paradigm called Cross-Environment Cooperation (CEC), and show that it outperforms competitive baselines quantitatively and qualitatively when collaborating with real people. Our findings suggest that learning to collaborate across many unique scenarios encourages agents to develop general norms, which prove effective for collaboration with different partners. Together, our results suggest a new route toward designing generalist cooperative agents capable of interacting with humans without requiring human data.}
}
Endnote
%0 Conference Paper
%T Cross-environment Cooperation Enables Zero-shot Multi-agent Coordination
%A Kunal Jha
%A Wilka Carvalho
%A Yancheng Liang
%A Simon Shaolei Du
%A Max Kleiman-Weiner
%A Natasha Jaques
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-jha25b
%I PMLR
%P 27198--27220
%U https://proceedings.mlr.press/v267/jha25b.html
%V 267
%X Zero-shot coordination (ZSC), the ability to adapt to a new partner in a cooperative task, is a critical component of human-compatible AI. While prior work has focused on training agents to cooperate on a single task, these specialized models do not generalize to new tasks, even if they are highly similar. Here, we study how reinforcement learning on a distribution of environments with a single partner enables learning general cooperative skills that support ZSC with many new partners on many new problems. We introduce two Jax-based, procedural generators that create billions of solvable coordination challenges. We develop a new paradigm called Cross-Environment Cooperation (CEC), and show that it outperforms competitive baselines quantitatively and qualitatively when collaborating with real people. Our findings suggest that learning to collaborate across many unique scenarios encourages agents to develop general norms, which prove effective for collaboration with different partners. Together, our results suggest a new route toward designing generalist cooperative agents capable of interacting with humans without requiring human data.
APA
Jha, K., Carvalho, W., Liang, Y., Du, S.S., Kleiman-Weiner, M. & Jaques, N. (2025). Cross-environment Cooperation Enables Zero-shot Multi-agent Coordination. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:27198-27220. Available from https://proceedings.mlr.press/v267/jha25b.html.