Variational Schrödinger Momentum Diffusion

Kevin Rojas, Yixin Tan, Molei Tao, Yuriy Nevmyvaka, Wei Deng
Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, PMLR 258:4645-4653, 2025.

Abstract

The Momentum Schrödinger Bridge (mSB) (Chen et al., 2023c) has emerged as a leading method for accelerating generative diffusion processes and reducing transport costs. However, the lack of simulation-free properties inevitably results in high training costs and affects scalability. To obtain a trade-off between transport properties and scalability, we introduce variational Schrödinger momentum diffusion (VSMD), which employs linearized forward score functions (variational scores) to eliminate the dependence on simulated forward trajectories. Our approach leverages a multivariate diffusion process with adaptively transport-optimized variational scores. Additionally, we apply a critical-damping transform to stabilize training by removing the need for score estimations for both velocity and samples. Theoretically, we prove the convergence of samples generated with optimal variational scores and momentum diffusion. Empirical results demonstrate that VSMD efficiently generates anisotropic shapes while maintaining transport efficacy, outperforming overdamped alternatives, and avoiding complex denoising processes. Our approach also scales effectively to real-world data, achieving competitive results in time series and image generation, both in unconditional and conditional settings.
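
As a rough illustrative sketch (not the paper's implementation) of the ingredients the abstract mentions, the snippet below simulates an underdamped ("momentum") forward diffusion whose forward drift uses a linear (variational) score. Because the drift is linear, the forward marginals are Gaussian and could in principle be obtained without simulating trajectories, which is the intuition behind the simulation-free property; the matrix A, the critical-damping value gamma, and the Euler-Maruyama discretization are assumptions made purely for illustration.

# Illustrative sketch only; NOT the implementation from Rojas et al. (2025).
# Underdamped ("momentum") forward SDE with a linear variational drift -A x:
#     dx = v dt
#     dv = (-A x - gamma v) dt + sqrt(2 gamma) dW
# With a linear drift, the forward marginals are Gaussian; we simulate here only
# to visualize how anisotropic data is transported toward the reference measure.
import numpy as np

def forward_momentum_diffusion(x0, A, gamma=2.0, T=5.0, n_steps=500, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = x0.copy()
    v = np.zeros_like(x0)                       # start at rest (zero velocity)
    for _ in range(n_steps):
        noise = rng.standard_normal(x.shape)
        x = x + v * dt                          # position update
        v = v + (-(x @ A.T) - gamma * v) * dt + np.sqrt(2.0 * gamma * dt) * noise
    return x, v

d = 2
A = np.eye(d)                                   # hypothetical variational score matrix
gamma = 2.0 * np.sqrt(1.0)                      # critical damping for A = I (assumption)
x0 = np.array([3.0, 0.3]) * np.random.default_rng(1).standard_normal((2000, d))
xT, vT = forward_momentum_diffusion(x0, A, gamma=gamma)
print(xT.std(axis=0), vT.std(axis=0))           # both should be close to 1 (N(0, I) reference)

In this toy run the anisotropic input (standard deviations of roughly 3.0 and 0.3 per coordinate) is driven toward an approximately isotropic Gaussian in both position and velocity; the reverse-time generative process described in the paper would then learn to undo this transport.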

Cite this Paper


BibTeX
@InProceedings{pmlr-v258-rojas25a,
  title     = {Variational Schrödinger Momentum Diffusion},
  author    = {Rojas, Kevin and Tan, Yixin and Tao, Molei and Nevmyvaka, Yuriy and Deng, Wei},
  booktitle = {Proceedings of The 28th International Conference on Artificial Intelligence and Statistics},
  pages     = {4645--4653},
  year      = {2025},
  editor    = {Li, Yingzhen and Mandt, Stephan and Agrawal, Shipra and Khan, Emtiyaz},
  volume    = {258},
  series    = {Proceedings of Machine Learning Research},
  month     = {03--05 May},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v258/main/assets/rojas25a/rojas25a.pdf},
  url       = {https://proceedings.mlr.press/v258/rojas25a.html},
  abstract  = {The Momentum Schr{ö}dinger Bridge (mSB) (Chen et al., 2023c) has emerged as a leading method for accelerating generative diffusion processes and reducing transport costs. However, the lack of simulation-free properties inevitably results in high training costs and affects scalability. To obtain a trade-off between transport properties and scalability, we introduce variational Schrödinger momentum diffusion (VSMD), which employs linearized forward score functions (variational scores) to eliminate the dependence on simulated forward trajectories. Our approach leverages a multivariate diffusion process with adaptively transport-optimized variational scores. Additionally, we apply a critical-damping transform to stabilize training by removing the need for score estimations for both velocity and samples. Theoretically, we prove the convergence of samples generated with optimal variational scores and momentum diffusion. Empirical results demonstrate that VSMD efficiently generates anisotropic shapes while maintaining transport efficacy, outperforming overdamped alternatives, and avoiding complex denoising processes. Our approach also scales effectively to real-world data, achieving competitive results in time series and image generation, both in unconditional and conditional settings.}
}
Endnote
%0 Conference Paper
%T Variational Schrödinger Momentum Diffusion
%A Kevin Rojas
%A Yixin Tan
%A Molei Tao
%A Yuriy Nevmyvaka
%A Wei Deng
%B Proceedings of The 28th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2025
%E Yingzhen Li
%E Stephan Mandt
%E Shipra Agrawal
%E Emtiyaz Khan
%F pmlr-v258-rojas25a
%I PMLR
%P 4645--4653
%U https://proceedings.mlr.press/v258/rojas25a.html
%V 258
%X The Momentum Schrödinger Bridge (mSB) (Chen et al., 2023c) has emerged as a leading method for accelerating generative diffusion processes and reducing transport costs. However, the lack of simulation-free properties inevitably results in high training costs and affects scalability. To obtain a trade-off between transport properties and scalability, we introduce variational Schrödinger momentum diffusion (VSMD), which employs linearized forward score functions (variational scores) to eliminate the dependence on simulated forward trajectories. Our approach leverages a multivariate diffusion process with adaptively transport-optimized variational scores. Additionally, we apply a critical-damping transform to stabilize training by removing the need for score estimations for both velocity and samples. Theoretically, we prove the convergence of samples generated with optimal variational scores and momentum diffusion. Empirical results demonstrate that VSMD efficiently generates anisotropic shapes while maintaining transport efficacy, outperforming overdamped alternatives, and avoiding complex denoising processes. Our approach also scales effectively to real-world data, achieving competitive results in time series and image generation, both in unconditional and conditional settings.
APA
Rojas, K., Tan, Y., Tao, M., Nevmyvaka, Y. & Deng, W. (2025). Variational Schrödinger Momentum Diffusion. Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 258:4645-4653. Available from https://proceedings.mlr.press/v258/rojas25a.html.
