Independent Natural Policy Gradient always converges in Markov Potential Games

Roy Fox, Stephen M. Mcaleer, Will Overman, Ioannis Panageas
Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:4414-4425, 2022.

Abstract

Natural policy gradient has emerged as one of the most successful algorithms for computing optimal policies in challenging Reinforcement Learning (RL) tasks, yet very little was known about its convergence properties until recently. The picture is even less clear in multi-agent RL (MARL): the line of work with theoretical guarantees of convergence to Nash policies is very limited. In this paper, we focus on a particular class of multi-agent stochastic games called Markov Potential Games and prove that Independent Natural Policy Gradient always converges with constant learning rates. The proof deviates from existing approaches; the main challenge lies in the fact that Markov Potential Games do not have unique optimal values (unlike single-agent settings), so different initializations can lead to limit points with different values. We complement our theoretical results with experiments indicating that Natural Policy Gradient outperforms Policy Gradient in MARL settings (our benchmark is multi-state congestion games).
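To make the update concrete, here is a minimal sketch (not the paper's pseudocode) of independent natural policy gradient with tabular softmax policies on a one-shot two-agent congestion game. For softmax policies the NPG step reduces to a multiplicative-weights update on each agent's marginal Q-values (equivalently, advantages, since the per-state value baseline cancels after normalization); the congestion costs, learning rate, and initialization below are illustrative assumptions, and the one-shot setting omits the discounting factor that appears in the multi-state case.

import numpy as np

eta = 0.1  # constant learning rate, as in the paper's setting (value chosen arbitrarily here)

# cost[f, c-1] = per-agent cost of facility f when c agents share it
cost = np.array([[1.0, 4.0],   # facility 0
                 [2.0, 3.0]])  # facility 1

def reward(a_i, a_other):
    """Agent's reward = negative congestion cost of its chosen facility."""
    load = 1 + int(a_other == a_i)
    return -cost[a_i, load - 1]

# independent softmax policies; slightly asymmetric start to break the symmetric fixed point
pi = [np.array([0.6, 0.4]), np.array([0.4, 0.6])]

for t in range(500):
    new_pi = []
    for i in range(2):
        j = 1 - i
        # marginal Q-values: expectation over the other agent's current policy
        q = np.array([sum(pi[j][a_j] * reward(a_i, a_j) for a_j in range(2))
                      for a_i in range(2)])
        # NPG step for softmax policies = multiplicative-weights update on Q-values
        # (subtracting the per-state value baseline would cancel after normalization)
        p = pi[i] * np.exp(eta * q)
        new_pi.append(p / p.sum())
    pi = new_pi  # simultaneous, independent updates

print("limit policies:", [p.round(3) for p in pi])

With this initialization the agents converge to the pure equilibrium in which they occupy different facilities; starting from the fully symmetric profile instead leaves them at the mixed fixed point, illustrating the abstract's remark that different initializations can lead to different limit points.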

Cite this Paper


BibTeX
@InProceedings{pmlr-v151-fox22a,
  title     = {Independent Natural Policy Gradient always converges in Markov Potential Games},
  author    = {Fox, Roy and Mcaleer, Stephen M. and Overman, Will and Panageas, Ioannis},
  booktitle = {Proceedings of The 25th International Conference on Artificial Intelligence and Statistics},
  pages     = {4414--4425},
  year      = {2022},
  editor    = {Camps-Valls, Gustau and Ruiz, Francisco J. R. and Valera, Isabel},
  volume    = {151},
  series    = {Proceedings of Machine Learning Research},
  month     = {28--30 Mar},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v151/fox22a/fox22a.pdf},
  url       = {https://proceedings.mlr.press/v151/fox22a.html},
  abstract  = {Natural policy gradient has emerged as one of the most successful algorithms for computing optimal policies in challenging Reinforcement Learning (RL) tasks, yet very little was known about its convergence properties until recently. The picture is even less clear in multi-agent RL (MARL): the line of work with theoretical guarantees of convergence to Nash policies is very limited. In this paper, we focus on a particular class of multi-agent stochastic games called Markov Potential Games and prove that Independent Natural Policy Gradient always converges with constant learning rates. The proof deviates from existing approaches; the main challenge lies in the fact that Markov Potential Games do not have unique optimal values (unlike single-agent settings), so different initializations can lead to limit points with different values. We complement our theoretical results with experiments indicating that Natural Policy Gradient outperforms Policy Gradient in MARL settings (our benchmark is multi-state congestion games).}
}
Endnote
%0 Conference Paper
%T Independent Natural Policy Gradient always converges in Markov Potential Games
%A Roy Fox
%A Stephen M. Mcaleer
%A Will Overman
%A Ioannis Panageas
%B Proceedings of The 25th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2022
%E Gustau Camps-Valls
%E Francisco J. R. Ruiz
%E Isabel Valera
%F pmlr-v151-fox22a
%I PMLR
%P 4414--4425
%U https://proceedings.mlr.press/v151/fox22a.html
%V 151
%X Natural policy gradient has emerged as one of the most successful algorithms for computing optimal policies in challenging Reinforcement Learning (RL) tasks, yet very little was known about its convergence properties until recently. The picture is even less clear in multi-agent RL (MARL): the line of work with theoretical guarantees of convergence to Nash policies is very limited. In this paper, we focus on a particular class of multi-agent stochastic games called Markov Potential Games and prove that Independent Natural Policy Gradient always converges with constant learning rates. The proof deviates from existing approaches; the main challenge lies in the fact that Markov Potential Games do not have unique optimal values (unlike single-agent settings), so different initializations can lead to limit points with different values. We complement our theoretical results with experiments indicating that Natural Policy Gradient outperforms Policy Gradient in MARL settings (our benchmark is multi-state congestion games).
APA
Fox, R., Mcaleer, S.M., Overman, W. & Panageas, I. (2022). Independent Natural Policy Gradient always converges in Markov Potential Games. Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 151:4414-4425. Available from https://proceedings.mlr.press/v151/fox22a.html.
