SGD Learns One-Layer Networks in WGANs

Qi Lei, Jason Lee, Alex Dimakis, Constantinos Daskalakis
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:5799-5808, 2020.

Abstract

Generative adversarial networks (GANs) are a widely used framework for learning generative models. Wasserstein GANs (WGANs), one of the most successful variants of GANs, require solving a minmax optimization problem to global optimality, but are in practice successfully trained using stochastic gradient descent-ascent. In this paper, we show that, when the generator is a one-layer network, stochastic gradient descent-ascent converges to a global solution with polynomial time and sample complexity.
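The training procedure the abstract refers to, stochastic gradient descent-ascent (SGDA), alternates a stochastic ascent step on the discriminator with a stochastic descent step on the generator. The toy sketch below is an illustrative assumption, not the paper's exact construction: the generator is the one-layer map G(z) = z + theta, the discriminator is linear with an added l2 penalty (assumed here to keep the bilinear game stable), and the data are Gaussian, so theta should drift toward the true mean.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
mu_star = rng.normal(size=d)       # mean of the (Gaussian) data distribution

theta = np.zeros(d)                # generator parameter: G(z) = z + theta
w = np.zeros(d)                    # linear discriminator: f_w(x) = w @ x
eta, lam = 0.05, 1.0               # step size; l2 penalty on w (assumed, for stability)
theta_sum, n_avg = np.zeros(d), 0

for t in range(5000):
    x = mu_star + rng.normal(size=d)   # real sample
    fake = rng.normal(size=d) + theta  # generated sample G(z)
    # stochastic ascent step on the regularized discriminator objective
    w += eta * (x - fake - lam * w)
    # stochastic descent step on the generator loss E[f_w(x)] - E[f_w(G(z))]
    theta += eta * w
    if t >= 2500:                      # average late iterates to smooth gradient noise
        theta_sum += theta
        n_avg += 1

theta_avg = theta_sum / n_avg
print(np.linalg.norm(theta_avg - mu_star))  # distance to the true mean (should be small)
```

Without the regularizer, simultaneous gradient descent-ascent on this bilinear game would cycle rather than converge; the paper's analysis handles the genuine one-layer-network setting rather than this simplified mean-matching example.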

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-lei20b,
  title     = {{SGD} Learns One-Layer Networks in {WGAN}s},
  author    = {Lei, Qi and Lee, Jason and Dimakis, Alex and Daskalakis, Constantinos},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {5799--5808},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/lei20b/lei20b.pdf},
  url       = {https://proceedings.mlr.press/v119/lei20b.html},
  abstract  = {Generative adversarial networks (GANs) are a widely used framework for learning generative models. Wasserstein GANs (WGANs), one of the most successful variants of GANs, require solving a minmax optimization problem to global optimality, but are in practice successfully trained using stochastic gradient descent-ascent. In this paper, we show that, when the generator is a one-layer network, stochastic gradient descent-ascent converges to a global solution with polynomial time and sample complexity.}
}
Endnote
%0 Conference Paper
%T SGD Learns One-Layer Networks in WGANs
%A Qi Lei
%A Jason Lee
%A Alex Dimakis
%A Constantinos Daskalakis
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-lei20b
%I PMLR
%P 5799--5808
%U https://proceedings.mlr.press/v119/lei20b.html
%V 119
%X Generative adversarial networks (GANs) are a widely used framework for learning generative models. Wasserstein GANs (WGANs), one of the most successful variants of GANs, require solving a minmax optimization problem to global optimality, but are in practice successfully trained using stochastic gradient descent-ascent. In this paper, we show that, when the generator is a one-layer network, stochastic gradient descent-ascent converges to a global solution with polynomial time and sample complexity.
APA
Lei, Q., Lee, J., Dimakis, A., & Daskalakis, C. (2020). SGD Learns One-Layer Networks in WGANs. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:5799-5808. Available from https://proceedings.mlr.press/v119/lei20b.html.