Listening to the noise: Blind Denoising with Gibbs Diffusion

David Heurtel-Depeiges, Charles Margossian, Ruben Ohana, Bruno Régaldo-Saint Blancard
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:18284-18304, 2024.

Abstract

In recent years, denoising problems have become intertwined with the development of deep generative models. In particular, diffusion models are trained like denoisers, and the distribution they model coincides with denoising priors in the Bayesian picture. However, denoising through diffusion-based posterior sampling requires the noise level and covariance to be known, preventing blind denoising. We overcome this limitation by introducing Gibbs Diffusion (GDiff), a general methodology addressing posterior sampling of both the signal and the noise parameters. Assuming arbitrary parametric Gaussian noise, we develop a Gibbs algorithm that alternates sampling steps from a conditional diffusion model, trained to map the signal prior to the class of noise distributions, with a Monte Carlo sampler that infers the noise parameters. Our theoretical analysis highlights potential pitfalls, guides diagnostic usage, and quantifies errors in the Gibbs stationary distribution caused by the diffusion model. We showcase our method for 1) blind denoising of natural images involving colored noises with unknown amplitude and exponent, and 2) a cosmology problem, namely the analysis of cosmic microwave background data, where Bayesian inference of "noise" parameters means constraining models of the evolution of the Universe.
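The abstract describes an algorithmic skeleton: a Gibbs sampler that alternates between sampling the signal with a conditional diffusion model and sampling the noise parameters with a Monte Carlo step. The sketch below is not the authors' implementation; it only illustrates that alternation for a 1-D signal corrupted by colored Gaussian noise with unknown amplitude sigma and spectral exponent alpha. The names gibbs_diffusion, colored_noise_loglik, and the callable sample_signal (standing in for a pretrained conditional diffusion posterior sampler) are hypothetical placeholders introduced for this example.

# Minimal sketch of the Gibbs structure described in the abstract (not the
# authors' code). Assumptions: 1-D signal, stationary colored Gaussian noise
# with power spectrum sigma^2 * f^(-alpha), and `sample_signal(y, sigma, alpha)`
# standing in for a conditional diffusion model that samples the signal
# posterior given the observation and the current noise parameters.
import numpy as np

def colored_noise_loglik(residual, sigma, alpha, eps=1e-12):
    """Whittle-type Gaussian log-likelihood of a residual with spectrum sigma^2 f^-alpha."""
    n = residual.size
    freqs = np.fft.rfftfreq(n)[1:]                       # positive frequencies only
    power = np.abs(np.fft.rfft(residual))[1:] ** 2 / n   # periodogram estimate
    spec = sigma ** 2 * freqs ** (-alpha) + eps
    return -0.5 * np.sum(power / spec + np.log(spec))

def gibbs_diffusion(y, sample_signal, n_iter=100, step=0.05, seed=None):
    """Alternate (1) sampling the signal given the noise parameters and
    (2) a Metropolis update of (sigma, alpha) given the residual y - x."""
    rng = np.random.default_rng(seed)
    sigma, alpha = 1.0, 0.0                               # initial noise parameters
    samples = []
    for _ in range(n_iter):
        # Step 1: sample the signal from the conditional diffusion model.
        x = sample_signal(y, sigma, alpha)
        # Step 2: Metropolis-Hastings update of the noise parameters.
        residual = y - x
        logp = colored_noise_loglik(residual, sigma, alpha)
        sigma_prop = abs(sigma + step * rng.standard_normal())
        alpha_prop = alpha + step * rng.standard_normal()
        logp_prop = colored_noise_loglik(residual, sigma_prop, alpha_prop)
        if np.log(rng.uniform()) < logp_prop - logp:
            sigma, alpha = sigma_prop, alpha_prop
        samples.append((x, sigma, alpha))
    return samples

In practice, sample_signal would wrap the trained conditional diffusion model; for a quick smoke test one can pass any function of (y, sigma, alpha) that returns an array shaped like y.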

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-heurtel-depeiges24a,
  title     = {Listening to the noise: Blind Denoising with {G}ibbs Diffusion},
  author    = {Heurtel-Depeiges, David and Margossian, Charles and Ohana, Ruben and R\'{e}galdo-Saint Blancard, Bruno},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {18284--18304},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/heurtel-depeiges24a/heurtel-depeiges24a.pdf},
  url       = {https://proceedings.mlr.press/v235/heurtel-depeiges24a.html}
}
APA
Heurtel-Depeiges, D., Margossian, C., Ohana, R., & Régaldo-Saint Blancard, B. (2024). Listening to the noise: Blind Denoising with Gibbs Diffusion. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:18284-18304. Available from https://proceedings.mlr.press/v235/heurtel-depeiges24a.html.