DCTdiff: Intriguing Properties of Image Generative Modeling in the DCT Space

Mang Ning, Mingxiao Li, Jianlin Su, Jia Haozhe, Lanmiao Liu, Martin Benes, Wenshuo Chen, Albert Ali Salah, Itir Onal Ertugrul
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:46498-46524, 2025.

Abstract

This paper explores image modeling in the frequency space and introduces DCTdiff, an end-to-end diffusion generative paradigm that efficiently models images in the discrete cosine transform (DCT) space. We investigate the design space of DCTdiff and reveal the key design factors. Experiments on different frameworks (UViT, DiT), generation tasks, and various diffusion samplers demonstrate that DCTdiff outperforms pixel-based diffusion models in both generative quality and training efficiency. Remarkably, DCTdiff can seamlessly scale up to 512$\times$512 resolution without using the latent diffusion paradigm and beats latent diffusion (using SD-VAE) at only 1/4 of the training cost. Finally, we illustrate several intriguing properties of DCT image modeling. For example, we provide a theoretical proof of why ‘image diffusion can be seen as spectral autoregression’, bridging the gap between diffusion and autoregressive models. The effectiveness of DCTdiff and the introduced properties suggest a promising direction for image modeling in the frequency space. The code is at https://github.com/forever208/DCTdiff.
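To make the core transform concrete, here is a minimal sketch (not the authors' implementation; it assumes a JPEG-style 8×8 block size and a single-channel image, whereas the paper's block size and channel handling may differ) of the invertible block-wise DCT mapping that a DCT-space diffusion model operates on:

    import numpy as np
    from scipy.fft import dctn, idctn

    BLOCK = 8  # assumed block size

    def image_to_dct_blocks(img):
        """Split an (H, W) image into BLOCK x BLOCK tiles and DCT each tile."""
        h, w = img.shape
        tiles = img.reshape(h // BLOCK, BLOCK, w // BLOCK, BLOCK).transpose(0, 2, 1, 3)
        # Orthonormal type-II DCT per tile, as in JPEG-style coding.
        return dctn(tiles, type=2, norm="ortho", axes=(-2, -1))

    def dct_blocks_to_image(coeffs):
        """Invert the block-wise DCT back to pixel space."""
        tiles = idctn(coeffs, type=2, norm="ortho", axes=(-2, -1))
        n_bh, n_bw = coeffs.shape[:2]
        return tiles.transpose(0, 2, 1, 3).reshape(n_bh * BLOCK, n_bw * BLOCK)

    img = np.random.rand(64, 64)
    print(np.allclose(dct_blocks_to_image(image_to_dct_blocks(img)), img))  # True: lossless

The ‘diffusion as spectral autoregression’ claim can likewise be checked with a toy experiment (my construction, not the paper's proof): natural-image spectra decay with frequency while Gaussian diffusion noise stays white under an orthonormal DCT, so the per-coefficient signal-to-noise ratio falls as frequency grows and denoising must resolve low frequencies before high ones.

    rng = np.random.default_rng(0)
    N = 64
    u = np.arange(N)
    # Shape white noise in DCT space to mimic a natural image's decaying spectrum.
    spec = rng.standard_normal((N, N)) / (1.0 + u[:, None] + u[None, :])

    sigma = 0.02  # Gaussian noise level; an orthonormal DCT keeps white noise white,
                  # so every coefficient receives noise of power sigma**2
    freq = u[:, None] + u[None, :]  # crude per-coefficient frequency index
    for lo, hi in [(0, 8), (28, 36), (100, 127)]:
        band = (freq >= lo) & (freq < hi)
        print(f"frequencies {lo}-{hi}: SNR = {(spec[band] ** 2).mean() / sigma**2:.2f}")
    # SNR drops monotonically with frequency and sinks below 1 in the highest
    # band: high frequencies are lost first as noise grows, which is the
    # spectral-autoregression reading of diffusion.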

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-ning25c,
  title     = {{DCT}diff: Intriguing Properties of Image Generative Modeling in the {DCT} Space},
  author    = {Ning, Mang and Li, Mingxiao and Su, Jianlin and Haozhe, Jia and Liu, Lanmiao and Benes, Martin and Chen, Wenshuo and Salah, Albert Ali and Onal Ertugrul, Itir},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {46498--46524},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/ning25c/ning25c.pdf},
  url       = {https://proceedings.mlr.press/v267/ning25c.html}
}
Endnote
%0 Conference Paper
%T DCTdiff: Intriguing Properties of Image Generative Modeling in the DCT Space
%A Mang Ning
%A Mingxiao Li
%A Jianlin Su
%A Jia Haozhe
%A Lanmiao Liu
%A Martin Benes
%A Wenshuo Chen
%A Albert Ali Salah
%A Itir Onal Ertugrul
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-ning25c
%I PMLR
%P 46498--46524
%U https://proceedings.mlr.press/v267/ning25c.html
%V 267
APA
Ning, M., Li, M., Su, J., Haozhe, J., Liu, L., Benes, M., Chen, W., Salah, A.A. & Onal Ertugrul, I. (2025). DCTdiff: Intriguing Properties of Image Generative Modeling in the DCT Space. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:46498-46524. Available from https://proceedings.mlr.press/v267/ning25c.html.
