Simulation-Free Differential Dynamics Through Neural Conservation Laws

Mengjian Hua, Eric Vanden-Eijnden, Ricky T. Q. Chen
Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence, PMLR 286:1730-1744, 2025.

Abstract

We present a novel simulation-free framework for training continuous-time diffusion processes over very general objective functions. Existing methods typically involve either prescribing the optimal diffusion process—which only works for heavily restricted problem formulations—or require expensive simulation to numerically obtain the time-dependent densities and sample from the diffusion process. In contrast, we propose a coupled parameterization which jointly models a time-dependent density function, or probability path, and the dynamics of a diffusion process that generates this probability path. To accomplish this, our approach directly bakes in the Fokker-Planck equation and density function requirements as hard constraints, by extending and greatly simplifying the construction of Neural Conservation Laws. This enables simulation-free training for a large variety of problem formulations, from data-driven objectives as in generative modeling and dynamical optimal transport, to optimality-based objectives as in stochastic optimal control, with straightforward extensions to mean-field objectives due to the ease of accessing exact density functions. We validate our method in a diverse range of application domains from modeling spatio-temporal events, to learning optimal dynamics from population data.
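For context, the Fokker-Planck equation referenced in the abstract is the identity linking the probability path to the dynamics that generate it. In one standard notation (chosen here purely for illustration; the paper's own symbols may differ), a diffusion process $dX_t = b_t(X_t)\,dt + \sigma_t\,dW_t$ transports a density $\rho_t$ according to

% Fokker-Planck equation relating the probability path \rho_t to the drift b_t
% and diffusion coefficient \sigma_t (illustrative notation, not taken from the paper's body).
\[
  \partial_t \rho_t(x) + \nabla \cdot \bigl(\rho_t(x)\, b_t(x)\bigr)
  = \tfrac{1}{2}\,\sigma_t^{2}\, \Delta \rho_t(x),
  \qquad \rho_t(x) \ge 0, \qquad \int \rho_t(x)\, dx = 1.
\]

Parameterizing the density and its flux jointly so that an identity of this form holds by construction is what allows the objectives described above to be evaluated without simulating the process forward in time.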

Cite this Paper


BibTeX
@InProceedings{pmlr-v286-hua25a,
  title     = {Simulation-Free Differential Dynamics Through Neural Conservation Laws},
  author    = {Hua, Mengjian and Vanden-Eijnden, Eric and Chen, Ricky T. Q.},
  booktitle = {Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence},
  pages     = {1730--1744},
  year      = {2025},
  editor    = {Chiappa, Silvia and Magliacane, Sara},
  volume    = {286},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--25 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v286/main/assets/hua25a/hua25a.pdf},
  url       = {https://proceedings.mlr.press/v286/hua25a.html}
}
Endnote
%0 Conference Paper
%T Simulation-Free Differential Dynamics Through Neural Conservation Laws
%A Mengjian Hua
%A Eric Vanden-Eijnden
%A Ricky T. Q. Chen
%B Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence
%C Proceedings of Machine Learning Research
%D 2025
%E Silvia Chiappa
%E Sara Magliacane
%F pmlr-v286-hua25a
%I PMLR
%P 1730--1744
%U https://proceedings.mlr.press/v286/hua25a.html
%V 286
APA
Hua, M., Vanden-Eijnden, E. & Chen, R. T. Q. (2025). Simulation-Free Differential Dynamics Through Neural Conservation Laws. Proceedings of the Forty-first Conference on Uncertainty in Artificial Intelligence, in Proceedings of Machine Learning Research 286:1730-1744. Available from https://proceedings.mlr.press/v286/hua25a.html.
