Inverse Problem Sampling in Latent Space Using Sequential Monte Carlo

Idan Achituve, Hai Victor Habi, Amir Rosenfeld, Arnon Netzer, Idit Diamant, Ethan Fetaya
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:420-443, 2025.

Abstract

In image processing, solving inverse problems is the task of finding plausible reconstructions of an image that was corrupted by some (usually known) degradation operator. Commonly, this process is done using a generative image model that can guide the reconstruction towards solutions that appear natural. The success of diffusion models over the last few years has made them a leading candidate for this task. However, the sequential nature of diffusion models makes this conditional sampling process challenging. Furthermore, since diffusion models are often defined in the latent space of an autoencoder, the encoder-decoder transformations introduce additional difficulties. To address these challenges, we suggest a novel sampling method based on sequential Monte Carlo (SMC) in the latent space of diffusion models. We name our method LD-SMC. We define a generative model for the data using additional auxiliary observations and perform posterior inference with SMC sampling based on a backward diffusion process. Empirical evaluations on ImageNet and FFHQ show the benefits of LD-SMC over competing methods in various inverse problem tasks and especially in challenging inpainting tasks.
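For readers unfamiliar with sequential Monte Carlo, the propagate-weight-resample loop the abstract refers to can be illustrated with a minimal, generic bootstrap particle filter. This is a toy 1D state-space example for intuition only; it is not the paper's LD-SMC algorithm, and all names below (`smc_filter`, the noise scales, etc.) are illustrative assumptions.

```python
import math
import random

random.seed(0)

def gaussian_logpdf(x, mu, sigma):
    # Log-density of N(mu, sigma^2) evaluated at x.
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

def smc_filter(observations, n_particles=500, proc_std=1.0, obs_std=0.5):
    """Bootstrap particle filter for a 1D Gaussian random-walk state-space model.

    Returns the posterior mean estimate of the latent state at each step.
    """
    particles = [random.gauss(0.0, 1.0) for _ in range(n_particles)]
    means = []
    for y in observations:
        # 1. Propagate particles through the transition model.
        particles = [x + random.gauss(0.0, proc_std) for x in particles]
        # 2. Weight each particle by the likelihood of the observation.
        logw = [gaussian_logpdf(y, x, obs_std) for x in particles]
        m = max(logw)  # subtract the max for numerical stability
        w = [math.exp(lw - m) for lw in logw]
        total = sum(w)
        w = [wi / total for wi in w]
        # Posterior mean estimate at this step.
        means.append(sum(wi * xi for wi, xi in zip(w, particles)))
        # 3. Resample particles in proportion to their weights.
        particles = random.choices(particles, weights=w, k=n_particles)
    return means

obs = [1.0, 1.5, 2.0, 2.5]
est = smc_filter(obs)
```

In LD-SMC the analogous loop runs over the steps of a backward diffusion process in the autoencoder's latent space, with weights derived from the degraded observation rather than from a simple Gaussian likelihood.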

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-achituve25a,
  title     = {Inverse Problem Sampling in Latent Space Using Sequential {M}onte {C}arlo},
  author    = {Achituve, Idan and Habi, Hai Victor and Rosenfeld, Amir and Netzer, Arnon and Diamant, Idit and Fetaya, Ethan},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {420--443},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/achituve25a/achituve25a.pdf},
  url       = {https://proceedings.mlr.press/v267/achituve25a.html},
  abstract  = {In image processing, solving inverse problems is the task of finding plausible reconstructions of an image that was corrupted by some (usually known) degradation operator. Commonly, this process is done using a generative image model that can guide the reconstruction towards solutions that appear natural. The success of diffusion models over the last few years has made them a leading candidate for this task. However, the sequential nature of diffusion models makes this conditional sampling process challenging. Furthermore, since diffusion models are often defined in the latent space of an autoencoder, the encoder-decoder transformations introduce additional difficulties. To address these challenges, we suggest a novel sampling method based on sequential Monte Carlo (SMC) in the latent space of diffusion models. We name our method LD-SMC. We define a generative model for the data using additional auxiliary observations and perform posterior inference with SMC sampling based on a backward diffusion process. Empirical evaluations on ImageNet and FFHQ show the benefits of LD-SMC over competing methods in various inverse problem tasks and especially in challenging inpainting tasks.}
}
Endnote
%0 Conference Paper
%T Inverse Problem Sampling in Latent Space Using Sequential Monte Carlo
%A Idan Achituve
%A Hai Victor Habi
%A Amir Rosenfeld
%A Arnon Netzer
%A Idit Diamant
%A Ethan Fetaya
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-achituve25a
%I PMLR
%P 420--443
%U https://proceedings.mlr.press/v267/achituve25a.html
%V 267
%X In image processing, solving inverse problems is the task of finding plausible reconstructions of an image that was corrupted by some (usually known) degradation operator. Commonly, this process is done using a generative image model that can guide the reconstruction towards solutions that appear natural. The success of diffusion models over the last few years has made them a leading candidate for this task. However, the sequential nature of diffusion models makes this conditional sampling process challenging. Furthermore, since diffusion models are often defined in the latent space of an autoencoder, the encoder-decoder transformations introduce additional difficulties. To address these challenges, we suggest a novel sampling method based on sequential Monte Carlo (SMC) in the latent space of diffusion models. We name our method LD-SMC. We define a generative model for the data using additional auxiliary observations and perform posterior inference with SMC sampling based on a backward diffusion process. Empirical evaluations on ImageNet and FFHQ show the benefits of LD-SMC over competing methods in various inverse problem tasks and especially in challenging inpainting tasks.
APA
Achituve, I., Habi, H.V., Rosenfeld, A., Netzer, A., Diamant, I. & Fetaya, E. (2025). Inverse Problem Sampling in Latent Space Using Sequential Monte Carlo. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:420-443. Available from https://proceedings.mlr.press/v267/achituve25a.html.

Related Material