Negative Momentum for Improved Game Dynamics

Gauthier Gidel, Reyhane Askari Hemmat, Mohammad Pezeshki, Rémi Le Priol, Gabriel Huang, Simon Lacoste-Julien, Ioannis Mitliagkas
Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics, PMLR 89:1802-1811, 2019.

Abstract

Games generalize the single-objective optimization paradigm by introducing different objective functions for different players. Differentiable games often proceed by simultaneous or alternating gradient updates. In machine learning, games are gaining new importance through formulations like generative adversarial networks (GANs) and actor-critic systems. However, compared to single-objective optimization, game dynamics are more complex and less well understood. In this paper, we analyze gradient-based methods with momentum on simple games. We prove that alternating updates are more stable than simultaneous updates. Next, we show both theoretically and empirically that alternating gradient updates with a negative momentum term achieve convergence not only on a difficult toy adversarial problem, but also on the notoriously difficult-to-train saturating GANs.
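
To make the two update schemes concrete, below is a minimal sketch (not the authors' code) contrasting simultaneous and alternating gradient updates with a negative momentum term on a toy bilinear game, min_x max_y x*y, whose unique equilibrium is (0, 0). The step size, momentum value, and iteration count are illustrative assumptions, not tuned settings from the paper.

# Minimal sketch: simultaneous vs. alternating gradient updates with a
# (negative) momentum term on the toy bilinear game  min_x max_y  x * y.
# The equilibrium is (0, 0); eta, beta, and steps are illustrative choices.

def simultaneous(eta=0.1, beta=-0.1, steps=2000):
    x, y = 1.0, 1.0          # players' parameters
    vx, vy = 0.0, 0.0        # momentum buffers
    for _ in range(steps):
        gx, gy = y, -x       # gradients for the minimizer (x) and maximizer (y)
        vx = beta * vx + gx  # both players update from the same stale iterates
        vy = beta * vy + gy
        x, y = x - eta * vx, y - eta * vy
    return x, y

def alternating(eta=0.1, beta=-0.1, steps=2000):
    x, y = 1.0, 1.0
    vx, vy = 0.0, 0.0
    for _ in range(steps):
        vx = beta * vx + y   # minimizer steps first ...
        x = x - eta * vx
        vy = beta * vy - x   # ... maximizer then reacts to the fresh x
        y = y - eta * vy
    return x, y

print("simultaneous:", simultaneous())  # should drift away from (0, 0)
print("alternating :", alternating())   # should spiral in toward (0, 0)

In this sketch, the rotational vector field of the bilinear game pushes simultaneous iterates outward, whereas alternating steps combined with a negative momentum term damp that rotation, which is the qualitative effect the paper analyzes.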

Cite this Paper


BibTeX
@InProceedings{pmlr-v89-gidel19a,
  title     = {Negative Momentum for Improved Game Dynamics},
  author    = {Gidel, Gauthier and Hemmat, Reyhane Askari and Pezeshki, Mohammad and Priol, R\'emi Le and Huang, Gabriel and Lacoste-Julien, Simon and Mitliagkas, Ioannis},
  booktitle = {Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics},
  pages     = {1802--1811},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Sugiyama, Masashi},
  volume    = {89},
  series    = {Proceedings of Machine Learning Research},
  month     = {16--18 Apr},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v89/gidel19a/gidel19a.pdf},
  url       = {https://proceedings.mlr.press/v89/gidel19a.html},
  abstract  = {Games generalize the single-objective optimization paradigm by introducing different objective functions for different players. Differentiable games often proceed by simultaneous or alternating gradient updates. In machine learning, games are gaining new importance through formulations like generative adversarial networks (GANs) and actor-critic systems. However, compared to single-objective optimization, game dynamics is more complex and less understood. In this paper, we analyze gradient-based methods with momentum on simple games. We prove that alternating updates are more stable than simultaneous updates. Next, we show both theoretically and empirically that alternating gradient updates with a negative momentum term achieves convergence in a difficult toy adversarial problem, but also on the notoriously difficult to train saturating GANs.}
}
Endnote
%0 Conference Paper
%T Negative Momentum for Improved Game Dynamics
%A Gauthier Gidel
%A Reyhane Askari Hemmat
%A Mohammad Pezeshki
%A Rémi Le Priol
%A Gabriel Huang
%A Simon Lacoste-Julien
%A Ioannis Mitliagkas
%B Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Masashi Sugiyama
%F pmlr-v89-gidel19a
%I PMLR
%P 1802--1811
%U https://proceedings.mlr.press/v89/gidel19a.html
%V 89
%X Games generalize the single-objective optimization paradigm by introducing different objective functions for different players. Differentiable games often proceed by simultaneous or alternating gradient updates. In machine learning, games are gaining new importance through formulations like generative adversarial networks (GANs) and actor-critic systems. However, compared to single-objective optimization, game dynamics is more complex and less understood. In this paper, we analyze gradient-based methods with momentum on simple games. We prove that alternating updates are more stable than simultaneous updates. Next, we show both theoretically and empirically that alternating gradient updates with a negative momentum term achieves convergence in a difficult toy adversarial problem, but also on the notoriously difficult to train saturating GANs.
APA
Gidel, G., Hemmat, R.A., Pezeshki, M., Priol, R.L., Huang, G., Lacoste-Julien, S. & Mitliagkas, I. (2019). Negative Momentum for Improved Game Dynamics. Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 89:1802-1811. Available from https://proceedings.mlr.press/v89/gidel19a.html.
