Generating images with sparse representations

Charlie Nash, Jacob Menick, Sander Dieleman, Peter Battaglia
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:7958-7968, 2021.

Abstract

The high dimensionality of images presents architecture and sampling-efficiency challenges for likelihood-based generative models. Previous approaches such as VQ-VAE use deep autoencoders to obtain compact representations, which are more practical as inputs for likelihood-based models. We present an alternative approach, inspired by common image compression methods like JPEG, and convert images to quantized discrete cosine transform (DCT) blocks, which are represented sparsely as a sequence of DCT channel, spatial location, and DCT coefficient triples. We propose a Transformer-based autoregressive architecture, which is trained to sequentially predict the conditional distribution of the next element in such sequences, and which scales effectively to high resolution images. On a range of image datasets, we demonstrate that our approach can generate high quality, diverse images, with sample metric scores competitive with state of the art methods. We additionally show that simple modifications to our method yield effective image colorization and super-resolution models.
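The abstract's core representation, quantizing JPEG-style DCT blocks and keeping only the nonzero coefficients as (DCT channel, spatial location, coefficient value) triples, can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation; the function names, block size, and quantization step `q` are assumptions chosen for the example.

```python
import numpy as np

def dct2(block):
    # Orthonormal 2-D DCT-II (as in JPEG), computed via the DCT matrix.
    n = block.shape[0]
    k = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] /= np.sqrt(2.0)
    return C @ block @ C.T

def sparse_dct_triples(image, block_size=8, q=16):
    """Quantize each DCT block and keep only nonzero coefficients as
    (channel, position, value) triples, analogous to the sparse sequence
    representation described in the abstract (details here are illustrative)."""
    h, w = image.shape
    blocks_per_row = w // block_size
    triples = []
    for by in range(0, h, block_size):
        for bx in range(0, w, block_size):
            coeffs = dct2(image[by:by + block_size, bx:bx + block_size].astype(float))
            quant = np.round(coeffs / q).astype(int)
            pos = (by // block_size) * blocks_per_row + bx // block_size
            for ch, v in enumerate(quant.ravel()):
                if v != 0:
                    # ch indexes the DCT channel within the block,
                    # pos the block's spatial location, v the quantized value.
                    triples.append((ch, pos, v))
    return triples

# A flat image concentrates all energy in each block's DC channel,
# so the sparse sequence has one triple per 8x8 block.
img = np.full((16, 16), 128.0)
triples = sparse_dct_triples(img)
print(triples)
```

Natural images yield longer but still highly sparse sequences, which is what makes an autoregressive Transformer over these triples tractable at high resolutions.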

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-nash21a,
  title     = {Generating images with sparse representations},
  author    = {Nash, Charlie and Menick, Jacob and Dieleman, Sander and Battaglia, Peter},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {7958--7968},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/nash21a/nash21a.pdf},
  url       = {https://proceedings.mlr.press/v139/nash21a.html},
  abstract  = {The high dimensionality of images presents architecture and sampling-efficiency challenges for likelihood-based generative models. Previous approaches such as VQ-VAE use deep autoencoders to obtain compact representations, which are more practical as inputs for likelihood-based models. We present an alternative approach, inspired by common image compression methods like JPEG, and convert images to quantized discrete cosine transform (DCT) blocks, which are represented sparsely as a sequence of DCT channel, spatial location, and DCT coefficient triples. We propose a Transformer-based autoregressive architecture, which is trained to sequentially predict the conditional distribution of the next element in such sequences, and which scales effectively to high resolution images. On a range of image datasets, we demonstrate that our approach can generate high quality, diverse images, with sample metric scores competitive with state of the art methods. We additionally show that simple modifications to our method yield effective image colorization and super-resolution models.}
}
Endnote
%0 Conference Paper
%T Generating images with sparse representations
%A Charlie Nash
%A Jacob Menick
%A Sander Dieleman
%A Peter Battaglia
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-nash21a
%I PMLR
%P 7958--7968
%U https://proceedings.mlr.press/v139/nash21a.html
%V 139
%X The high dimensionality of images presents architecture and sampling-efficiency challenges for likelihood-based generative models. Previous approaches such as VQ-VAE use deep autoencoders to obtain compact representations, which are more practical as inputs for likelihood-based models. We present an alternative approach, inspired by common image compression methods like JPEG, and convert images to quantized discrete cosine transform (DCT) blocks, which are represented sparsely as a sequence of DCT channel, spatial location, and DCT coefficient triples. We propose a Transformer-based autoregressive architecture, which is trained to sequentially predict the conditional distribution of the next element in such sequences, and which scales effectively to high resolution images. On a range of image datasets, we demonstrate that our approach can generate high quality, diverse images, with sample metric scores competitive with state of the art methods. We additionally show that simple modifications to our method yield effective image colorization and super-resolution models.
APA
Nash, C., Menick, J., Dieleman, S., & Battaglia, P. (2021). Generating images with sparse representations. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:7958-7968. Available from https://proceedings.mlr.press/v139/nash21a.html.