Neural Guided Diffusion Bridges

Gefan Yang, Frank Van Der Meulen, Stefan Sommer
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:71210-71230, 2025.

Abstract

We propose a novel method for simulating conditioned diffusion processes (diffusion bridges) in Euclidean spaces. By training a neural network to approximate bridge dynamics, our approach eliminates the need for computationally intensive Markov Chain Monte Carlo (MCMC) methods or reverse-process modeling. Compared to existing methods, it offers greater robustness across various diffusion specifications and conditioning scenarios. This applies in particular to rare events and multimodal distributions, which pose challenges for score-learning- and MCMC-based approaches. We propose a flexible variational family, partially specified by a neural network, for approximating the diffusion bridge path measure. Once trained, it enables efficient independent sampling at a cost comparable to sampling the unconditioned (forward) process.
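To make the notion of a "guided" diffusion bridge concrete, the sketch below is a minimal, hypothetical illustration (not the paper's implementation): it simulates a one-dimensional Brownian bridge from x0 at time 0 to a conditioning value v at time T with Euler-Maruyama, using the exact Doob h-transform drift (v - x)/(T - t). The paper's method would instead use a guiding drift that is partially specified by a trained neural network; all function names and parameters here are illustrative assumptions.

```python
# Hypothetical sketch: exact guided (Doob h-transform) simulation of a
# Brownian bridge. This stands in for the general idea of adding a guiding
# drift to the forward SDE so that paths hit the conditioning value v at T.
import numpy as np

def simulate_bridge(x0, v, T=1.0, sigma=1.0, n_steps=500, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    dt = T / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        t = k * dt
        # For Brownian motion the conditioned (bridge) drift is known in
        # closed form; a learned method would replace this term.
        drift = (v - x[k]) / (T - t)
        noise = sigma * np.sqrt(dt) * rng.standard_normal()
        x[k + 1] = x[k] + drift * dt + noise
    return x

path = simulate_bridge(x0=0.0, v=2.0)
print(path[-1])  # ends (approximately) at the conditioning value v = 2.0
```

Once such a guiding drift is fixed (hand-crafted here, learned in the paper), drawing independent bridge samples costs essentially one forward Euler-Maruyama pass per sample, which is the efficiency claim made in the abstract.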

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-yang25af,
  title     = {Neural Guided Diffusion Bridges},
  author    = {Yang, Gefan and Van Der Meulen, Frank and Sommer, Stefan},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {71210--71230},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/yang25af/yang25af.pdf},
  url       = {https://proceedings.mlr.press/v267/yang25af.html},
  abstract  = {We propose a novel method for simulating conditioned diffusion processes (diffusion bridges) in Euclidean spaces. By training a neural network to approximate bridge dynamics, our approach eliminates the need for computationally intensive Markov Chain Monte Carlo (MCMC) methods or reverse-process modeling. Compared to existing methods, it offers greater robustness across various diffusion specifications and conditioning scenarios. This applies in particular to rare events and multimodal distributions, which pose challenges for score-learning- and MCMC-based approaches. We propose a flexible variational family for approximating the diffusion bridge path measure which is partially specified by a neural network. Once trained, it enables efficient independent sampling at a cost comparable to sampling the unconditioned (forward) process.}
}
Endnote
%0 Conference Paper
%T Neural Guided Diffusion Bridges
%A Gefan Yang
%A Frank Van Der Meulen
%A Stefan Sommer
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-yang25af
%I PMLR
%P 71210--71230
%U https://proceedings.mlr.press/v267/yang25af.html
%V 267
%X We propose a novel method for simulating conditioned diffusion processes (diffusion bridges) in Euclidean spaces. By training a neural network to approximate bridge dynamics, our approach eliminates the need for computationally intensive Markov Chain Monte Carlo (MCMC) methods or reverse-process modeling. Compared to existing methods, it offers greater robustness across various diffusion specifications and conditioning scenarios. This applies in particular to rare events and multimodal distributions, which pose challenges for score-learning- and MCMC-based approaches. We propose a flexible variational family for approximating the diffusion bridge path measure which is partially specified by a neural network. Once trained, it enables efficient independent sampling at a cost comparable to sampling the unconditioned (forward) process.
APA
Yang, G., Van Der Meulen, F. & Sommer, S. (2025). Neural Guided Diffusion Bridges. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:71210-71230. Available from https://proceedings.mlr.press/v267/yang25af.html.
