Diffusion bridges vector quantized variational autoencoders

Max Cohen, Guillaume Quispe, Sylvain Le Corff, Charles Ollion, Eric Moulines
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:4141-4156, 2022.

Abstract

Vector Quantized-Variational AutoEncoders (VQ-VAE) are generative models based on discrete latent representations of the data, where inputs are mapped to a finite set of learned embeddings. To generate new samples, an autoregressive prior distribution over the discrete states must be trained separately. This prior is generally very complex and leads to slow generation. In this work, we propose a new model to train the prior and the encoder/decoder networks simultaneously. We build a diffusion bridge between a continuous coded vector and a non-informative prior distribution. The latent discrete states are then given as random functions of these continuous vectors. We show that our model is competitive with the autoregressive prior on the mini-Imagenet and CIFAR datasets and is efficient in both optimization and sampling. Our framework also extends the standard VQ-VAE and enables end-to-end training.
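To make the two ingredients named in the abstract concrete, here is a minimal NumPy sketch, assuming a toy codebook and an illustrative variance-preserving noise schedule (both placeholders, not the paper's architecture or parameters). It shows the standard VQ-VAE nearest-neighbor quantization of a continuous code, forward noising of that code toward a standard Gaussian (the kind of non-informative prior a bridge can target), and how a discrete state can be read off a continuous vector along the way.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy codebook of K learned embeddings in R^D (values are placeholders).
    K, D = 8, 4
    codebook = rng.normal(size=(K, D))

    def quantize(z):
        # Standard VQ-VAE step: map a continuous code z to the index of
        # its nearest codebook embedding (Euclidean distance).
        d = np.sum((codebook - z) ** 2, axis=1)
        return int(np.argmin(d))

    # A continuous encoder output (stand-in for an encoder network).
    z0 = rng.normal(size=D)

    # Variance-preserving forward noising from z0 toward a standard
    # Gaussian N(0, I), used here as the non-informative prior.
    T = 10
    alphas = np.linspace(0.95, 0.5, T)  # illustrative schedule, not the paper's
    z = z0.copy()
    for a in alphas:
        z = np.sqrt(a) * z + np.sqrt(1.0 - a) * rng.normal(size=D)

    # At any point along the bridge, a discrete latent state is obtained
    # as a (random) function of the continuous vector, e.g. by
    # nearest-neighbor assignment in the codebook.
    print("state at t=0:", quantize(z0), "state at t=T:", quantize(z))

In the paper the reverse-time bridge is learned jointly with the encoder/decoder networks; the snippet only illustrates the forward direction and the continuous-to-discrete readout.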

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-cohen22b,
  title     = {Diffusion bridges vector quantized variational autoencoders},
  author    = {Cohen, Max and Quispe, Guillaume and {Le Corff}, Sylvain and Ollion, Charles and Moulines, Eric},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {4141--4156},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/cohen22b/cohen22b.pdf},
  url       = {https://proceedings.mlr.press/v162/cohen22b.html}
}
Endnote
%0 Conference Paper
%T Diffusion bridges vector quantized variational autoencoders
%A Max Cohen
%A Guillaume Quispe
%A Sylvain Le Corff
%A Charles Ollion
%A Eric Moulines
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-cohen22b
%I PMLR
%P 4141--4156
%U https://proceedings.mlr.press/v162/cohen22b.html
%V 162
APA
Cohen, M., Quispe, G., Le Corff, S., Ollion, C. & Moulines, E. (2022). Diffusion bridges vector quantized variational autoencoders. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:4141-4156. Available from https://proceedings.mlr.press/v162/cohen22b.html.
