Extra-gradient with player sampling for faster convergence in n-player games

Samy Jelassi, Carles Domingo-Enrich, Damien Scieur, Arthur Mensch, Joan Bruna
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:4736-4745, 2020.

Abstract

Data-driven modeling increasingly requires finding a Nash equilibrium in multi-player games, e.g. when training GANs. In this paper, we analyse a new extra-gradient method for finding Nash equilibria that performs gradient extrapolations and updates on a random subset of players at each iteration. This approach provably exhibits a better rate of convergence than full extra-gradient for non-smooth convex games with a noisy gradient oracle. We propose an additional variance reduction mechanism to obtain speed-ups in smooth convex games. Our approach makes extrapolation amenable to massive multiplayer settings and brings empirical speed-ups, in particular when using a heuristic cyclic sampling scheme. Most importantly, it enables training GANs and mixtures of GANs faster and to better quality.
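The sampled extra-gradient scheme the abstract describes, extrapolate then update on a random subset of players, can be sketched as follows. This is an illustrative simplification, not the paper's exact algorithm: the function names, the gradient-oracle signature, and the uniform player sampling are assumptions made for the sketch.

```python
import numpy as np

def player_sampled_extragradient(grad, x, n_players, sample_size, lr, n_iters, rng=None):
    """Illustrative sketch of extra-gradient with player sampling.

    grad(x, i) is assumed to return player i's gradient of its own loss
    with respect to its parameters x[i]; this oracle interface is an
    assumption of the sketch, not the paper's API.
    """
    rng = np.random.default_rng(rng)
    x = [np.asarray(xi, dtype=float) for xi in x]
    for _ in range(n_iters):
        # Extrapolation: a look-ahead gradient step for a sampled subset of players.
        sampled = rng.choice(n_players, size=sample_size, replace=False)
        x_ext = [xi.copy() for xi in x]
        for i in sampled:
            x_ext[i] = x[i] - lr * grad(x, i)
        # Update: gradient step evaluated at the extrapolated point,
        # again restricted to a (re-)sampled subset of players.
        sampled = rng.choice(n_players, size=sample_size, replace=False)
        for i in sampled:
            x[i] = x[i] - lr * grad(x_ext, i)
    return x
```

With `sample_size = n_players` this reduces to the classical (full) extra-gradient method; taking `sample_size < n_players` gives the doubly stochastic variant, where each iteration touches only a fraction of the players.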

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-jelassi20a,
  title     = {Extra-gradient with player sampling for faster convergence in n-player games},
  author    = {Jelassi, Samy and Domingo-Enrich, Carles and Scieur, Damien and Mensch, Arthur and Bruna, Joan},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {4736--4745},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/jelassi20a/jelassi20a.pdf},
  url       = {https://proceedings.mlr.press/v119/jelassi20a.html},
  abstract  = {Data-driven modeling increasingly requires to find a Nash equilibrium in multi-player games, e.g. when training GANs. In this paper, we analyse a new extra-gradient method for Nash equilibrium finding, that performs gradient extrapolations and updates on a random subset of players at each iteration. This approach provably exhibits a better rate of convergence than full extra-gradient for non-smooth convex games with noisy gradient oracle. We propose an additional variance reduction mechanism to obtain speed-ups in smooth convex games. Our approach makes extrapolation amenable to massive multiplayer settings, and brings empirical speed-ups, in particular when using a heuristic cyclic sampling scheme. Most importantly, it allows to train faster and better GANs and mixtures of GANs.}
}
Endnote
%0 Conference Paper
%T Extra-gradient with player sampling for faster convergence in n-player games
%A Samy Jelassi
%A Carles Domingo-Enrich
%A Damien Scieur
%A Arthur Mensch
%A Joan Bruna
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-jelassi20a
%I PMLR
%P 4736--4745
%U https://proceedings.mlr.press/v119/jelassi20a.html
%V 119
%X Data-driven modeling increasingly requires to find a Nash equilibrium in multi-player games, e.g. when training GANs. In this paper, we analyse a new extra-gradient method for Nash equilibrium finding, that performs gradient extrapolations and updates on a random subset of players at each iteration. This approach provably exhibits a better rate of convergence than full extra-gradient for non-smooth convex games with noisy gradient oracle. We propose an additional variance reduction mechanism to obtain speed-ups in smooth convex games. Our approach makes extrapolation amenable to massive multiplayer settings, and brings empirical speed-ups, in particular when using a heuristic cyclic sampling scheme. Most importantly, it allows to train faster and better GANs and mixtures of GANs.
APA
Jelassi, S., Domingo-Enrich, C., Scieur, D., Mensch, A. & Bruna, J. (2020). Extra-gradient with player sampling for faster convergence in n-player games. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:4736-4745. Available from https://proceedings.mlr.press/v119/jelassi20a.html.