Improving Diffusion Models for Inverse Problems Using Optimal Posterior Covariance

Xinyu Peng, Ziyang Zheng, Wenrui Dai, Nuoqian Xiao, Chenglin Li, Junni Zou, Hongkai Xiong
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:40347-40370, 2024.

Abstract

Recent diffusion models provide a promising zero-shot solution to noisy linear inverse problems without retraining for specific inverse problems. In this paper, we reveal that recent methods can be uniformly interpreted as employing a Gaussian approximation with hand-crafted isotropic covariance for the intractable denoising posterior to approximate the conditional posterior mean. Inspired by this finding, we propose to improve recent methods by using more principled covariance determined by maximum likelihood estimation. To achieve posterior covariance optimization without retraining, we provide general plug-and-play solutions based on two approaches specifically designed for leveraging pre-trained models with and without reverse covariance. We further propose a scalable method for learning posterior covariance prediction based on representation with orthonormal basis. Experimental results demonstrate that the proposed methods significantly enhance reconstruction performance without requiring hyperparameter tuning.
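The unified view described in the abstract — approximating the intractable denoising posterior p(x0 | xt) by a Gaussian N(x0_hat, r² I) and using it to form the measurement-conditioned posterior mean — can be sketched numerically. The snippet below is an illustrative toy, not the paper's implementation; the function name and the parameters `A`, `sigma_y`, and `r` are hypothetical stand-ins for the linear operator, measurement noise level, and hand-crafted isotropic covariance scale.

```python
import numpy as np

def conditional_posterior_mean(x0_hat, y, A, sigma_y, r):
    """Illustrative sketch (not the paper's code): correct the denoiser
    output x0_hat toward measurements y = A @ x0 + noise, under the
    Gaussian approximation p(x0 | x_t) ~ N(x0_hat, r**2 * I).

    E[x0 | x_t, y] ~= x0_hat
                      + r^2 A^T (sigma_y^2 I + r^2 A A^T)^{-1} (y - A x0_hat)
    """
    m = A.shape[0]
    # Covariance of y given x_t under the Gaussian approximation.
    S = sigma_y**2 * np.eye(m) + r**2 * (A @ A.T)
    return x0_hat + r**2 * A.T @ np.linalg.solve(S, y - A @ x0_hat)

# Toy check: with near-noiseless measurements, the corrected estimate
# should almost exactly satisfy A @ x_corr == y.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))       # underdetermined linear operator
x0 = rng.standard_normal(5)
y = A @ x0                            # clean measurements
x0_hat = x0 + 0.1 * rng.standard_normal(5)  # imperfect denoiser output
x_corr = conditional_posterior_mean(x0_hat, y, A, sigma_y=1e-4, r=1.0)
```

The paper's contribution, in this notation, is to replace the hand-crafted isotropic r² I with a covariance determined by maximum likelihood estimation.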

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-peng24h,
  title     = {Improving Diffusion Models for Inverse Problems Using Optimal Posterior Covariance},
  author    = {Peng, Xinyu and Zheng, Ziyang and Dai, Wenrui and Xiao, Nuoqian and Li, Chenglin and Zou, Junni and Xiong, Hongkai},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {40347--40370},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/peng24h/peng24h.pdf},
  url       = {https://proceedings.mlr.press/v235/peng24h.html},
  abstract  = {Recent diffusion models provide a promising zero-shot solution to noisy linear inverse problems without retraining for specific inverse problems. In this paper, we reveal that recent methods can be uniformly interpreted as employing a Gaussian approximation with hand-crafted isotropic covariance for the intractable denoising posterior to approximate the conditional posterior mean. Inspired by this finding, we propose to improve recent methods by using more principled covariance determined by maximum likelihood estimation. To achieve posterior covariance optimization without retraining, we provide general plug-and-play solutions based on two approaches specifically designed for leveraging pre-trained models with and without reverse covariance. We further propose a scalable method for learning posterior covariance prediction based on representation with orthonormal basis. Experimental results demonstrate that the proposed methods significantly enhance reconstruction performance without requiring hyperparameter tuning.}
}
Endnote
%0 Conference Paper
%T Improving Diffusion Models for Inverse Problems Using Optimal Posterior Covariance
%A Xinyu Peng
%A Ziyang Zheng
%A Wenrui Dai
%A Nuoqian Xiao
%A Chenglin Li
%A Junni Zou
%A Hongkai Xiong
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-peng24h
%I PMLR
%P 40347--40370
%U https://proceedings.mlr.press/v235/peng24h.html
%V 235
%X Recent diffusion models provide a promising zero-shot solution to noisy linear inverse problems without retraining for specific inverse problems. In this paper, we reveal that recent methods can be uniformly interpreted as employing a Gaussian approximation with hand-crafted isotropic covariance for the intractable denoising posterior to approximate the conditional posterior mean. Inspired by this finding, we propose to improve recent methods by using more principled covariance determined by maximum likelihood estimation. To achieve posterior covariance optimization without retraining, we provide general plug-and-play solutions based on two approaches specifically designed for leveraging pre-trained models with and without reverse covariance. We further propose a scalable method for learning posterior covariance prediction based on representation with orthonormal basis. Experimental results demonstrate that the proposed methods significantly enhance reconstruction performance without requiring hyperparameter tuning.
APA
Peng, X., Zheng, Z., Dai, W., Xiao, N., Li, C., Zou, J. & Xiong, H. (2024). Improving Diffusion Models for Inverse Problems Using Optimal Posterior Covariance. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:40347-40370. Available from https://proceedings.mlr.press/v235/peng24h.html.