Fast Teammate Adaptation in the Presence of Sudden Policy Change
Proceedings of the Thirty-Ninth Conference on Uncertainty in Artificial Intelligence, PMLR 216:2465-2476, 2023.
Abstract
Cooperative multi-agent reinforcement learning (MARL), where agents coordinate with teammate(s) toward a shared goal, may suffer from non-stationarity caused by the policy changes of teammates. Prior works mainly concentrate on policy changes across episodes, ignoring the fact that teammates may undergo a sudden policy change within an episode, which might lead to miscoordination and poor performance. We formulate the problem as an open Dec-POMDP, where we control some agents to coordinate with uncontrolled teammates whose policies could change within one episode. We then develop a new framework, \textit{\textbf{Fas}t \textbf{t}eammates \textbf{a}da\textbf{p}tation (\textbf{Fastap})}, to address the problem. Concretely, we first train versatile teammate policies and assign them to different clusters via the Chinese Restaurant Process (CRP). Then, we train the controlled agent(s) to coordinate with the sampled uncontrolled teammates by capturing their identities as context for fast adaptation. Finally, each agent uses its local information to anticipate the teammates’ context for decision-making accordingly. This process proceeds alternately, leading to a robust policy that can adapt to any teammates during the decentralized execution phase. We show on multiple multi-agent benchmarks that Fastap achieves superior performance over multiple baselines in both stationary and non-stationary scenarios.
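As a rough illustration of the clustering step mentioned in the abstract, the sketch below shows how teammate policies could be sequentially assigned to clusters under a Chinese Restaurant Process prior. This is not the authors' implementation: the concentration parameter `alpha`, the function name `crp_assignments`, and the purely size-based seating rule are illustrative assumptions made here for clarity.

```python
# Minimal sketch (assumption, not the paper's code) of CRP cluster assignment:
# policy i joins existing cluster k with probability n_k / (i + alpha)
# and opens a new cluster with probability alpha / (i + alpha).
import numpy as np

rng = np.random.default_rng(0)

def crp_assignments(num_policies: int, alpha: float = 1.0) -> list[int]:
    """Return a cluster index for each of `num_policies` teammate policies."""
    assignments: list[int] = []
    cluster_sizes: list[int] = []
    for _ in range(num_policies):
        # Unnormalized weights: existing cluster sizes, plus alpha for a new cluster.
        weights = np.array(cluster_sizes + [alpha], dtype=float)
        probs = weights / weights.sum()
        k = int(rng.choice(len(probs), p=probs))
        if k == len(cluster_sizes):
            cluster_sizes.append(1)      # open a new cluster
        else:
            cluster_sizes[k] += 1        # join an existing cluster
        assignments.append(k)
    return assignments

if __name__ == "__main__":
    # Example: group 10 hypothetical teammate policies into CRP clusters.
    print(crp_assignments(num_policies=10, alpha=1.0))
```

In the paper's framework, such cluster labels would serve as coarse identities of teammate behavior; the controlled agents are then trained to infer this context from local observations so they can adapt when the teammates' policy switches mid-episode.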