Compressed Image Generation with Denoising Diffusion Codebook Models

Guy Ohayon, Hila Manor, Tomer Michaeli, Michael Elad
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:47044-47089, 2025.

Abstract

We present a novel generative approach based on Denoising Diffusion Models (DDMs), which produces high-quality image samples along with their losslessly compressed bit-stream representations. This is obtained by replacing the standard Gaussian noise sampling in the reverse diffusion with a selection of noise samples from pre-defined codebooks of fixed i.i.d. Gaussian vectors. Surprisingly, we find that our method, termed Denoising Diffusion Codebook Model (DDCM), retains the sample quality and diversity of standard DDMs, even for extremely small codebooks. We leverage DDCM and pick the noises from the codebooks that best match a given image, converting our generative model into a highly effective lossy image codec achieving state-of-the-art perceptual image compression results. More generally, by setting other noise selection rules, we extend our compression method to any conditional image generation task (e.g., image restoration), where the generated images are produced jointly with their condensed bit-stream representations. Our work is accompanied by a mathematical interpretation of the proposed compressed conditional generation schemes, establishing a connection with score-based approximations of posterior samplers for the tasks considered. Code and demo are available on our project’s website.
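To make the noise-selection idea concrete, below is a minimal, hypothetical sketch in Python/NumPy of a DDCM-style compression loop as described in the abstract: at each reverse step the noise is not sampled freshly but chosen from a fixed codebook so as to best match the target image, and the chosen indices form the bit-stream. The denoiser interface, the simplified posterior-mean update, and all names (ddcm_compress, alphas_bar, sigmas, codebook_size) are illustrative assumptions, not the authors' implementation.

import numpy as np

rng = np.random.default_rng(0)

def ddcm_compress(x_target, denoiser, alphas_bar, sigmas, codebook_size=64):
    """Greedy noise selection: at each reverse step, pick the codebook noise
    whose resulting iterate is closest to the target image. The chosen indices
    are the compressed representation (log2(codebook_size) bits per step)."""
    T = len(sigmas)
    x = rng.standard_normal(x_target.shape)          # x_T ~ N(0, I)
    indices = []
    for t in reversed(range(1, T)):
        # Fixed, pre-defined codebook of i.i.d. Gaussian noise vectors for this
        # step (in practice it would be reproducible from a shared seed).
        codebook = rng.standard_normal((codebook_size,) + x_target.shape)
        x0_hat = denoiser(x, t)                      # model's estimate of the clean image
        mean = np.sqrt(alphas_bar[t - 1]) * x0_hat   # simplified posterior mean (assumption)
        candidates = mean + sigmas[t] * codebook
        errors = ((candidates - x_target) ** 2).reshape(codebook_size, -1).sum(axis=1)
        k = int(np.argmin(errors))
        indices.append(k)
        x = candidates[k]
    return x, indices  # reconstruction and its bit-stream of codebook indices

Under these assumptions, decompression would regenerate the same fixed codebooks (e.g., from a shared seed) and replay the stored indices, so only the index stream needs to be transmitted.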

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-ohayon25a,
  title     = {Compressed Image Generation with Denoising Diffusion Codebook Models},
  author    = {Ohayon, Guy and Manor, Hila and Michaeli, Tomer and Elad, Michael},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {47044--47089},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/ohayon25a/ohayon25a.pdf},
  url       = {https://proceedings.mlr.press/v267/ohayon25a.html},
  abstract  = {We present a novel generative approach based on Denoising Diffusion Models (DDMs), which produces high-quality image samples along with their losslessly compressed bit-stream representations. This is obtained by replacing the standard Gaussian noise sampling in the reverse diffusion with a selection of noise samples from pre-defined codebooks of fixed i.i.d. Gaussian vectors. Surprisingly, we find that our method, termed Denoising Diffusion Codebook Model (DDCM), retains the sample quality and diversity of standard DDMs, even for extremely small codebooks. We leverage DDCM and pick the noises from the codebooks that best match a given image, converting our generative model into a highly effective lossy image codec achieving state-of-the-art perceptual image compression results. More generally, by setting other noise selection rules, we extend our compression method to any conditional image generation task (e.g., image restoration), where the generated images are produced jointly with their condensed bit-stream representations. Our work is accompanied by a mathematical interpretation of the proposed compressed conditional generation schemes, establishing a connection with score-based approximations of posterior samplers for the tasks considered. Code and demo are available on our project’s website.}
}
Endnote
%0 Conference Paper
%T Compressed Image Generation with Denoising Diffusion Codebook Models
%A Guy Ohayon
%A Hila Manor
%A Tomer Michaeli
%A Michael Elad
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-ohayon25a
%I PMLR
%P 47044--47089
%U https://proceedings.mlr.press/v267/ohayon25a.html
%V 267
%X We present a novel generative approach based on Denoising Diffusion Models (DDMs), which produces high-quality image samples along with their losslessly compressed bit-stream representations. This is obtained by replacing the standard Gaussian noise sampling in the reverse diffusion with a selection of noise samples from pre-defined codebooks of fixed i.i.d. Gaussian vectors. Surprisingly, we find that our method, termed Denoising Diffusion Codebook Model (DDCM), retains the sample quality and diversity of standard DDMs, even for extremely small codebooks. We leverage DDCM and pick the noises from the codebooks that best match a given image, converting our generative model into a highly effective lossy image codec achieving state-of-the-art perceptual image compression results. More generally, by setting other noise selection rules, we extend our compression method to any conditional image generation task (e.g., image restoration), where the generated images are produced jointly with their condensed bit-stream representations. Our work is accompanied by a mathematical interpretation of the proposed compressed conditional generation schemes, establishing a connection with score-based approximations of posterior samplers for the tasks considered. Code and demo are available on our project’s website.
APA
Ohayon, G., Manor, H., Michaeli, T., & Elad, M. (2025). Compressed Image Generation with Denoising Diffusion Codebook Models. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:47044-47089. Available from https://proceedings.mlr.press/v267/ohayon25a.html.
