PixelCNN Models with Auxiliary Variables for Natural Image Modeling

Alexander Kolesnikov, Christoph H. Lampert
Proceedings of the 34th International Conference on Machine Learning, PMLR 70:1905-1914, 2017.

Abstract

We study probabilistic models of natural images and extend the autoregressive family of PixelCNN models by incorporating auxiliary variables. Subsequently, we describe two new generative image models that exploit different image transformations as auxiliary variables: a quantized grayscale view of the image or a multi-resolution image pyramid. The proposed models tackle two known shortcomings of existing PixelCNN models: 1) their tendency to focus on low-level image details, while largely ignoring high-level image information, such as object shapes, and 2) their computationally costly procedure for image sampling. We experimentally demonstrate the benefits of our models, in particular showing that they produce much more realistic-looking image samples than previous state-of-the-art probabilistic models.
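As a rough sketch of the modeling idea summarized above (the notation here is ours, not necessarily the paper's), the auxiliary variable can be read as a deterministic view X̂ = f(X) of the image X, e.g. its quantized grayscale version or a coarser level of an image pyramid. The image distribution then splits into a PixelCNN over the auxiliary view and a conditional PixelCNN over the full image:

```latex
% Sketch only: \hat{X} = f(X) is the auxiliary view (quantized grayscale or a
% coarser pyramid level); symbols are assumptions consistent with the abstract.
\[
  p(X) \;=\; p\big(\hat{X}\big)\, p\big(X \mid \hat{X}\big),
  \qquad \hat{X} = f(X) \ \text{deterministic},
\]
\[
  p\big(\hat{X}\big) \;=\; \prod_{i=1}^{n} p\big(\hat{x}_i \mid \hat{x}_{<i}\big),
  \qquad
  p\big(X \mid \hat{X}\big) \;=\; \prod_{i=1}^{n} p\big(x_i \mid x_{<i}, \hat{X}\big).
\]
```

Under this reading, sampling proceeds in two stages: first draw the auxiliary view X̂ (cheap, since it is grayscale or low-resolution), then draw X pixel-by-pixel conditioned on X̂, which is presumably how the grayscale and pyramid variants address the high-level-structure and sampling-cost issues mentioned in the abstract.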

Cite this Paper


BibTeX
@InProceedings{pmlr-v70-kolesnikov17a,
  title     = {{P}ixel{CNN} Models with Auxiliary Variables for Natural Image Modeling},
  author    = {Alexander Kolesnikov and Christoph H. Lampert},
  booktitle = {Proceedings of the 34th International Conference on Machine Learning},
  pages     = {1905--1914},
  year      = {2017},
  editor    = {Precup, Doina and Teh, Yee Whye},
  volume    = {70},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--11 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v70/kolesnikov17a/kolesnikov17a.pdf},
  url       = {https://proceedings.mlr.press/v70/kolesnikov17a.html},
  abstract  = {We study probabilistic models of natural images and extend the autoregressive family of PixelCNN models by incorporating auxiliary variables. Subsequently, we describe two new generative image models that exploit different image transformations as auxiliary variables: a quantized grayscale view of the image or a multi-resolution image pyramid. The proposed models tackle two known shortcomings of existing PixelCNN models: 1) their tendency to focus on low-level image details, while largely ignoring high-level image information, such as object shapes, and 2) their computationally costly procedure for image sampling. We experimentally demonstrate benefits of our models, in particular showing that they produce much more realistically looking image samples than previous state-of-the-art probabilistic models.}
}
Endnote
%0 Conference Paper
%T PixelCNN Models with Auxiliary Variables for Natural Image Modeling
%A Alexander Kolesnikov
%A Christoph H. Lampert
%B Proceedings of the 34th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2017
%E Doina Precup
%E Yee Whye Teh
%F pmlr-v70-kolesnikov17a
%I PMLR
%P 1905--1914
%U https://proceedings.mlr.press/v70/kolesnikov17a.html
%V 70
%X We study probabilistic models of natural images and extend the autoregressive family of PixelCNN models by incorporating auxiliary variables. Subsequently, we describe two new generative image models that exploit different image transformations as auxiliary variables: a quantized grayscale view of the image or a multi-resolution image pyramid. The proposed models tackle two known shortcomings of existing PixelCNN models: 1) their tendency to focus on low-level image details, while largely ignoring high-level image information, such as object shapes, and 2) their computationally costly procedure for image sampling. We experimentally demonstrate benefits of our models, in particular showing that they produce much more realistically looking image samples than previous state-of-the-art probabilistic models.
APA
Kolesnikov, A. & Lampert, C.H. (2017). PixelCNN Models with Auxiliary Variables for Natural Image Modeling. Proceedings of the 34th International Conference on Machine Learning, in Proceedings of Machine Learning Research 70:1905-1914. Available from https://proceedings.mlr.press/v70/kolesnikov17a.html.