Improving Gaussian mixture latent variable model convergence with Optimal Transport

Benoit Gaujac, Ilya Feige, David Barber
Proceedings of The 13th Asian Conference on Machine Learning, PMLR 157:737-752, 2021.

Abstract

Generative models with both discrete and continuous latent variables are highly motivated by the structure of many real-world data sets. They present, however, subtleties in training often manifesting in the discrete latent variable not being leveraged. In this paper, we show why such models struggle to train using traditional log-likelihood maximization, and that they are amenable to training using the Optimal Transport framework of Wasserstein Autoencoders. We find our discrete latent variable to be fully leveraged by the model when trained, without any modifications to the objective function or significant fine tuning. Our model generates comparable samples to other approaches while using relatively simple neural networks, since the discrete latent variable carries much of the descriptive burden. Furthermore, the discrete latent provides significant control over generation.
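Illustration (not from the paper): the abstract describes training a Gaussian-mixture latent variable model with the Optimal Transport framework of Wasserstein Autoencoders (WAE) instead of log-likelihood maximization. The sketch below is a minimal, hypothetical PyTorch example of that general recipe only: a deterministic autoencoder, a mixture-of-Gaussians prior, and an MMD penalty matching the encoded distribution to that prior. The network sizes, fixed component means, inverse-multiquadratic kernel, and penalty weight lam are all assumptions, not the authors' implementation.

# Hypothetical sketch of a WAE-MMD objective with a Gaussian-mixture prior.
# Everything marked "assumed" is an illustration choice, not the paper's setup.
import torch
import torch.nn as nn

K, D_Z, D_X = 10, 8, 784  # mixture components, continuous latent dim, data dim (assumed)

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(D_X, 256), nn.ReLU(), nn.Linear(256, D_Z))
    def forward(self, x):
        return self.net(x)  # deterministic encoding z = Q(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(D_Z, 256), nn.ReLU(), nn.Linear(256, D_X))
    def forward(self, z):
        return torch.sigmoid(self.net(z))

# Gaussian-mixture prior: K components with fixed random means, unit covariance (assumed).
prior_means = torch.randn(K, D_Z) * 3.0

def sample_prior(n):
    k = torch.randint(0, K, (n,))                # discrete latent: mixture assignment
    return prior_means[k] + torch.randn(n, D_Z)  # continuous latent around the chosen mean

def mmd(z_q, z_p, scale=1.0):
    # Inverse-multiquadratic kernel MMD between encoded and prior samples (assumed kernel).
    def kernel(a, b):
        c = 2.0 * D_Z * scale
        return c / (c + torch.cdist(a, b) ** 2)
    n = z_q.size(0)
    off_diag = 1.0 - torch.eye(n)
    k_qq, k_pp, k_qp = kernel(z_q, z_q), kernel(z_p, z_p), kernel(z_q, z_p)
    return ((k_qq * off_diag).sum() + (k_pp * off_diag).sum()) / (n * (n - 1)) - 2.0 * k_qp.mean()

enc, dec = Encoder(), Decoder()
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
lam = 10.0  # weight on the latent-matching penalty (assumed)

def step(x):
    z_q = enc(x)
    recon = ((x - dec(z_q)) ** 2).sum(dim=1).mean()   # transport (reconstruction) cost
    penalty = mmd(z_q, sample_prior(x.size(0)))       # match aggregate posterior to the GMM prior
    loss = recon + lam * penalty
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

Sampling the prior by first drawing a mixture component mirrors the abstract's point that the discrete latent can carry descriptive weight and steer generation; fixing that component at sampling time gives controlled generation from a single mode.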

Cite this Paper


BibTeX
@InProceedings{pmlr-v157-gaujac21a,
  title     = {Improving Gaussian mixture latent variable model convergence with Optimal Transport},
  author    = {Gaujac, Benoit and Feige, Ilya and Barber, David},
  booktitle = {Proceedings of The 13th Asian Conference on Machine Learning},
  pages     = {737--752},
  year      = {2021},
  editor    = {Balasubramanian, Vineeth N. and Tsang, Ivor},
  volume    = {157},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--19 Nov},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v157/gaujac21a/gaujac21a.pdf},
  url       = {https://proceedings.mlr.press/v157/gaujac21a.html},
  abstract  = {Generative models with both discrete and continuous latent variables are highly motivated by the structure of many real-world data sets. They present, however, subtleties in training often manifesting in the discrete latent variable not being leveraged. In this paper, we show why such models struggle to train using traditional log-likelihood maximization, and that they are amenable to training using the Optimal Transport framework of Wasserstein Autoencoders. We find our discrete latent variable to be fully leveraged by the model when trained, without any modifications to the objective function or significant fine tuning. Our model generates comparable samples to other approaches while using relatively simple neural networks, since the discrete latent variable carries much of the descriptive burden. Furthermore, the discrete latent provides significant control over generation.}
}
Endnote
%0 Conference Paper
%T Improving Gaussian mixture latent variable model convergence with Optimal Transport
%A Benoit Gaujac
%A Ilya Feige
%A David Barber
%B Proceedings of The 13th Asian Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Vineeth N. Balasubramanian
%E Ivor Tsang
%F pmlr-v157-gaujac21a
%I PMLR
%P 737--752
%U https://proceedings.mlr.press/v157/gaujac21a.html
%V 157
%X Generative models with both discrete and continuous latent variables are highly motivated by the structure of many real-world data sets. They present, however, subtleties in training often manifesting in the discrete latent variable not being leveraged. In this paper, we show why such models struggle to train using traditional log-likelihood maximization, and that they are amenable to training using the Optimal Transport framework of Wasserstein Autoencoders. We find our discrete latent variable to be fully leveraged by the model when trained, without any modifications to the objective function or significant fine tuning. Our model generates comparable samples to other approaches while using relatively simple neural networks, since the discrete latent variable carries much of the descriptive burden. Furthermore, the discrete latent provides significant control over generation.
APA
Gaujac, B., Feige, I. & Barber, D. (2021). Improving Gaussian mixture latent variable model convergence with Optimal Transport. Proceedings of The 13th Asian Conference on Machine Learning, in Proceedings of Machine Learning Research 157:737-752. Available from https://proceedings.mlr.press/v157/gaujac21a.html.
