Learning Implicit Generative Models with the Method of Learned Moments

Suman Ravuri, Shakir Mohamed, Mihaela Rosca, Oriol Vinyals
Proceedings of the 35th International Conference on Machine Learning, PMLR 80:4314-4323, 2018.

Abstract

We propose a method of moments (MoM) algorithm for training large-scale implicit generative models. Moment estimation in this setting encounters two problems: it is often difficult to define the millions of moments needed to learn the model parameters, and it is hard to determine which properties are useful when specifying moments. To address the first issue, we introduce a moment network, and define the moments as the network’s hidden units and the gradient of the network’s output with respect to its parameters. To tackle the second problem, we use asymptotic theory to highlight desiderata for moments – namely they should minimize the asymptotic variance of estimated model parameters – and introduce an objective to learn better moments. The sequence of objectives created by this Method of Learned Moments (MoLM) can train high-quality neural image samplers. On CIFAR-10, we demonstrate that MoLM-trained generators achieve significantly higher Inception Scores and lower Fréchet Inception Distances than those trained with gradient penalty-regularized and spectrally-normalized adversarial objectives. These generators also achieve nearly perfect Multi-Scale Structural Similarity Scores on CelebA, and can create high-quality samples of 128×128 images.
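To make the idea concrete, below is a minimal sketch of the moment-matching step the abstract describes: moments are taken as the moment network’s hidden units plus the gradient of its output with respect to its parameters, and the generator is updated to match the average moments of real data. The toy architectures, sizes, names (G, F), and the use of PyTorch are illustrative assumptions, not the authors’ implementation; the separate variance-minimizing objective used to learn the moment network itself is omitted here.

```python
# Sketch of one MoLM-style generator update (assumed setup, not the paper's code).
import torch
import torch.nn as nn

torch.manual_seed(0)
latent_dim, data_dim, hidden = 16, 32, 64

# Toy generator and moment network (hypothetical architectures).
G = nn.Sequential(nn.Linear(latent_dim, hidden), nn.ReLU(), nn.Linear(hidden, data_dim))
F = nn.Sequential(nn.Linear(data_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

def moments(x):
    """Moments = the moment network's hidden units plus the gradient of its
    (summed) scalar output with respect to the network's parameters."""
    h = torch.relu(F[0](x))                       # hidden-unit moments
    out = F(x).sum()
    grads = torch.autograd.grad(out, list(F.parameters()), create_graph=True)
    grad_moments = torch.cat([g.reshape(-1) for g in grads]) / x.shape[0]
    return torch.cat([h.mean(0), grad_moments])

# Generator step: minimize the squared difference between average moments
# of a real-data batch and of generated samples.
opt = torch.optim.Adam(G.parameters(), lr=1e-4)
x_real = torch.randn(128, data_dim)               # stand-in for a real data batch
z = torch.randn(128, latent_dim)
loss = ((moments(x_real).detach() - moments(G(z))) ** 2).sum()
opt.zero_grad()
loss.backward()
opt.step()
```

In the paper's framing, this moment-matching step alternates with updates that re-learn the moment network, producing the "sequence of objectives" the abstract refers to; the sketch above shows only the generator side.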

Cite this Paper

BibTeX
@InProceedings{pmlr-v80-ravuri18a,
  title     = {Learning Implicit Generative Models with the Method of Learned Moments},
  author    = {Ravuri, Suman and Mohamed, Shakir and Rosca, Mihaela and Vinyals, Oriol},
  booktitle = {Proceedings of the 35th International Conference on Machine Learning},
  pages     = {4314--4323},
  year      = {2018},
  editor    = {Dy, Jennifer and Krause, Andreas},
  volume    = {80},
  series    = {Proceedings of Machine Learning Research},
  month     = {10--15 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v80/ravuri18a/ravuri18a.pdf},
  url       = {https://proceedings.mlr.press/v80/ravuri18a.html}
}
Endnote
%0 Conference Paper
%T Learning Implicit Generative Models with the Method of Learned Moments
%A Suman Ravuri
%A Shakir Mohamed
%A Mihaela Rosca
%A Oriol Vinyals
%B Proceedings of the 35th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2018
%E Jennifer Dy
%E Andreas Krause
%F pmlr-v80-ravuri18a
%I PMLR
%P 4314--4323
%U https://proceedings.mlr.press/v80/ravuri18a.html
%V 80
APA
Ravuri, S., Mohamed, S., Rosca, M. & Vinyals, O. (2018). Learning Implicit Generative Models with the Method of Learned Moments. Proceedings of the 35th International Conference on Machine Learning, in Proceedings of Machine Learning Research 80:4314-4323. Available from https://proceedings.mlr.press/v80/ravuri18a.html.