Convergence Analysis of Gradient-Based Learning in Continuous Games

Benjamin Chasnov, Lillian Ratliff, Eric Mazumdar, Samuel Burden
Proceedings of The 35th Uncertainty in Artificial Intelligence Conference, PMLR 115:935-944, 2020.

Abstract

Considering a class of gradient-based multi-agent learning algorithms in non-cooperative settings, we provide convergence guarantees to a neighborhood of a stable Nash equilibrium. In particular, we consider continuous games where agents learn in 1) deterministic settings with oracle access to their gradient and 2) stochastic settings with an unbiased estimator of their gradient. We also study the effects of non-uniform learning rates, which cause a distortion of the vector field that can alter which equilibrium the agents converge to and the path they take. We support the analysis with numerical examples that provide insight into how one might synthesize games to achieve desired equilibria.
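To make the setting concrete, here is a minimal sketch (not the authors' code) of the kind of gradient-based learning the abstract describes: two agents each following the gradient of their own cost with non-uniform learning rates. The quadratic costs f1, f2 and the step sizes below are illustrative assumptions chosen so that the origin is the unique stable Nash equilibrium.

```python
# Minimal sketch of simultaneous gradient play in a two-player continuous game.
# The costs f1(x, y) = x^2 + x*y and f2(x, y) = y^2 + x*y and the learning
# rates are illustrative choices, not taken from the paper; the unique Nash
# equilibrium of this game is (0, 0).

def grad_f1(x, y):
    # Agent 1's partial gradient: d/dx [x**2 + x*y] = 2*x + y
    return 2 * x + y

def grad_f2(x, y):
    # Agent 2's partial gradient: d/dy [y**2 + x*y] = 2*y + x
    return 2 * y + x

def gradient_play(x, y, lr1=0.1, lr2=0.05, steps=2000):
    """Each agent descends its own cost with its own (non-uniform) rate,
    using oracle access to its individual gradient (the deterministic case)."""
    for _ in range(steps):
        gx, gy = grad_f1(x, y), grad_f2(x, y)  # simultaneous gradient queries
        x, y = x - lr1 * gx, y - lr2 * gy
    return x, y

x_star, y_star = gradient_play(1.0, -1.0)
print(x_star, y_star)  # both coordinates contract toward the equilibrium (0, 0)
```

Scaling one agent's learning rate (here `lr2` vs. `lr1`) rescales a row of the game's vector field, which is the distortion the abstract refers to: it can change both the trajectory and, in games with multiple equilibria, which equilibrium the dynamics settle on.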

Cite this Paper


BibTeX
@InProceedings{pmlr-v115-chasnov20a,
  title     = {Convergence Analysis of Gradient-Based Learning in Continuous Games},
  author    = {Chasnov, Benjamin and Ratliff, Lillian and Mazumdar, Eric and Burden, Samuel},
  booktitle = {Proceedings of The 35th Uncertainty in Artificial Intelligence Conference},
  pages     = {935--944},
  year      = {2020},
  editor    = {Adams, Ryan P. and Gogate, Vibhav},
  volume    = {115},
  series    = {Proceedings of Machine Learning Research},
  month     = {22--25 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v115/chasnov20a/chasnov20a.pdf},
  url       = {https://proceedings.mlr.press/v115/chasnov20a.html},
  abstract  = {Considering a class of gradient-based multi-agent learning algorithms in non-cooperative settings, we provide convergence guarantees to a neighborhood of a stable Nash equilibrium. In particular, we consider continuous games where agents learn in 1) deterministic settings with oracle access to their gradient and 2) stochastic settings with an unbiased estimator of their gradient. We also study the effects of non-uniform learning rates, which cause a distortion of the vector field that can alter which equilibrium the agents converge to and the path they take. We support the analysis with numerical examples that provide insight into how one might synthesize games to achieve desired equilibria.}
}
Endnote
%0 Conference Paper
%T Convergence Analysis of Gradient-Based Learning in Continuous Games
%A Benjamin Chasnov
%A Lillian Ratliff
%A Eric Mazumdar
%A Samuel Burden
%B Proceedings of The 35th Uncertainty in Artificial Intelligence Conference
%C Proceedings of Machine Learning Research
%D 2020
%E Ryan P. Adams
%E Vibhav Gogate
%F pmlr-v115-chasnov20a
%I PMLR
%P 935--944
%U https://proceedings.mlr.press/v115/chasnov20a.html
%V 115
%X Considering a class of gradient-based multi-agent learning algorithms in non-cooperative settings, we provide convergence guarantees to a neighborhood of a stable Nash equilibrium. In particular, we consider continuous games where agents learn in 1) deterministic settings with oracle access to their gradient and 2) stochastic settings with an unbiased estimator of their gradient. We also study the effects of non-uniform learning rates, which cause a distortion of the vector field that can alter which equilibrium the agents converge to and the path they take. We support the analysis with numerical examples that provide insight into how one might synthesize games to achieve desired equilibria.
APA
Chasnov, B., Ratliff, L., Mazumdar, E. & Burden, S. (2020). Convergence Analysis of Gradient-Based Learning in Continuous Games. Proceedings of The 35th Uncertainty in Artificial Intelligence Conference, in Proceedings of Machine Learning Research 115:935-944. Available from https://proceedings.mlr.press/v115/chasnov20a.html.