Finding Mixed Nash Equilibria of Generative Adversarial Networks

Ya-Ping Hsieh, Chen Liu, Volkan Cevher
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:2810-2819, 2019.

Abstract

Generative adversarial networks (GANs) are known to achieve state-of-the-art performance on various generative tasks, but these results come at the expense of a notoriously difficult training phase. Current training strategies typically draw a connection to optimization theory, whose scope is restricted to local convergence due to the presence of non-convexity. In this work, we tackle the training of GANs by rethinking the problem formulation from the mixed Nash equilibria (NE) perspective. Via a classical lifting trick, we show that essentially all existing GAN objectives can be relaxed into their mixed-strategy forms, whose global optima can be found via sampling, in contrast to the exclusive use of the optimization framework in previous work. We further propose a mean-approximation sampling scheme, which allows us to systematically exploit methods for bi-affine games to derive novel, practical training algorithms for GANs. Finally, we provide experimental evidence that our approach yields comparable or superior results to contemporary training algorithms, and outperforms classical methods such as SGD, Adam, and RMSProp.
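
To make the lifting concrete, here is a minimal sketch of the relaxation the abstract refers to; the notation (a payoff f, generator parameters θ, discriminator parameters φ, and measures μ, ν) is illustrative and not necessarily the paper's own. The pure-strategy GAN game

\[
  \min_{\theta \in \Theta} \; \max_{\phi \in \Phi} \; f(\theta, \phi)
\]

is non-convex in θ and non-concave in φ, so optimization-based analyses remain local. The mixed-strategy relaxation instead optimizes over probability measures on the parameter spaces,

\[
  \min_{\mu \in \mathcal{P}(\Theta)} \; \max_{\nu \in \mathcal{P}(\Phi)} \;
  \int_{\Theta} \int_{\Phi} f(\theta, \phi) \, \mathrm{d}\nu(\phi) \, \mathrm{d}\mu(\theta),
\]

which is affine in μ for fixed ν and affine in ν for fixed μ, i.e., a bi-affine game. This is what makes the global saddle point (the mixed Nash equilibrium) amenable to sampling-based methods rather than purely local optimization.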

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-hsieh19b,
  title     = {Finding Mixed {N}ash Equilibria of Generative Adversarial Networks},
  author    = {Hsieh, Ya-Ping and Liu, Chen and Cevher, Volkan},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {2810--2819},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/hsieh19b/hsieh19b.pdf},
  url       = {https://proceedings.mlr.press/v97/hsieh19b.html}
}
Endnote
%0 Conference Paper
%T Finding Mixed Nash Equilibria of Generative Adversarial Networks
%A Ya-Ping Hsieh
%A Chen Liu
%A Volkan Cevher
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-hsieh19b
%I PMLR
%P 2810--2819
%U https://proceedings.mlr.press/v97/hsieh19b.html
%V 97
APA
Hsieh, Y., Liu, C., & Cevher, V. (2019). Finding Mixed Nash Equilibria of Generative Adversarial Networks. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:2810-2819. Available from https://proceedings.mlr.press/v97/hsieh19b.html.