Fast Teammate Adaptation in the Presence of Sudden Policy Change

Ziqian Zhang, Lei Yuan, Lihe Li, Ke Xue, Chengxing Jia, Cong Guan, Chao Qian, Yang Yu
Proceedings of the Thirty-Ninth Conference on Uncertainty in Artificial Intelligence, PMLR 216:2465-2476, 2023.

Abstract

Cooperative multi-agent reinforcement learning (MARL), where agents coordinate with teammates toward a shared goal, may suffer from non-stationarity caused by changes in the teammates' policies. Prior work mainly concentrates on policy changes across episodes, ignoring the fact that teammates may undergo a sudden policy change within an episode, which can lead to miscoordination and poor performance. We formulate the problem as an open Dec-POMDP, where we control some agents to coordinate with uncontrolled teammates whose policies may change within a single episode. We then develop a new framework, Fast teammates adaptation (Fastap), to address the problem. Concretely, we first train versatile teammate policies and assign them to different clusters via the Chinese Restaurant Process (CRP). Then, we train the controlled agent(s) to coordinate with the sampled uncontrolled teammates by capturing their identities as context for fast adaptation. Finally, each agent uses its local information to infer the teammates' context and make decisions accordingly. These steps proceed alternately, leading to a robust policy that can adapt to any teammates during the decentralized execution phase. We show on multiple multi-agent benchmarks that Fastap achieves superior performance to multiple baselines in both stationary and non-stationary scenarios.
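
The abstract describes assigning trained teammate policies to clusters via the Chinese Restaurant Process (CRP). As a rough illustration only, and not the paper's actual assignment rule (which also conditions on how the policies behave), the following minimal Python sketch shows the CRP prior: a new item joins an existing cluster with probability proportional to that cluster's size, or opens a new cluster with probability proportional to a concentration parameter alpha. The function name crp_assign and its parameters are hypothetical.

import numpy as np

def crp_assign(cluster_sizes, alpha, rng=None):
    """Sample a cluster index for a new item under the Chinese Restaurant Process.

    cluster_sizes: current cluster sizes (number of items already in each cluster).
    alpha: concentration parameter; larger values favor opening new clusters.
    Returns an existing cluster index, or len(cluster_sizes) for a new cluster.
    """
    rng = rng or np.random.default_rng()
    counts = np.asarray(cluster_sizes, dtype=float)
    total = counts.sum() + alpha
    # Join existing cluster k with probability proportional to its size;
    # open a new cluster with probability proportional to alpha.
    probs = np.append(counts, alpha) / total
    return int(rng.choice(len(probs), p=probs))

# Example: sequentially assign 10 hypothetical teammate policies to clusters.
sizes = []
for _ in range(10):
    k = crp_assign(sizes, alpha=1.0)
    if k == len(sizes):
        sizes.append(1)   # open a new cluster
    else:
        sizes[k] += 1
print(sizes)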

Cite this Paper


BibTeX
@InProceedings{pmlr-v216-zhang23a,
  title     = {Fast Teammate Adaptation in the Presence of Sudden Policy Change},
  author    = {Zhang, Ziqian and Yuan, Lei and Li, Lihe and Xue, Ke and Jia, Chengxing and Guan, Cong and Qian, Chao and Yu, Yang},
  booktitle = {Proceedings of the Thirty-Ninth Conference on Uncertainty in Artificial Intelligence},
  pages     = {2465--2476},
  year      = {2023},
  editor    = {Evans, Robin J. and Shpitser, Ilya},
  volume    = {216},
  series    = {Proceedings of Machine Learning Research},
  month     = {31 Jul--04 Aug},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v216/zhang23a/zhang23a.pdf},
  url       = {https://proceedings.mlr.press/v216/zhang23a.html}
}
Endnote
%0 Conference Paper
%T Fast Teammate Adaptation in the Presence of Sudden Policy Change
%A Ziqian Zhang
%A Lei Yuan
%A Lihe Li
%A Ke Xue
%A Chengxing Jia
%A Cong Guan
%A Chao Qian
%A Yang Yu
%B Proceedings of the Thirty-Ninth Conference on Uncertainty in Artificial Intelligence
%C Proceedings of Machine Learning Research
%D 2023
%E Robin J. Evans
%E Ilya Shpitser
%F pmlr-v216-zhang23a
%I PMLR
%P 2465--2476
%U https://proceedings.mlr.press/v216/zhang23a.html
%V 216
APA
Zhang, Z., Yuan, L., Li, L., Xue, K., Jia, C., Guan, C., Qian, C. & Yu, Y. (2023). Fast Teammate Adaptation in the Presence of Sudden Policy Change. Proceedings of the Thirty-Ninth Conference on Uncertainty in Artificial Intelligence, in Proceedings of Machine Learning Research 216:2465-2476. Available from https://proceedings.mlr.press/v216/zhang23a.html.