Finite-Time Analysis of Discrete-Time Stochastic Interpolants

Yuhao Liu, Yu Chen, Rui Hu, Longbo Huang
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:38119-38143, 2025.

Abstract

The stochastic interpolant framework offers a powerful approach for constructing generative models based on ordinary differential equations (ODEs) or stochastic differential equations (SDEs) to transform arbitrary data distributions. However, prior analyses of this framework have primarily focused on the continuous-time setting, assuming perfect solution of the underlying equations. In this work, we present the first discrete-time analysis of the stochastic interpolant framework, where we introduce an innovative discrete-time sampler and derive a finite-time upper bound on its distribution estimation error. Our result provides a novel quantification of how different factors, including the distance between source and target distributions and estimation accuracy, affect the convergence rate, and also offers a new principled way to design efficient schedules for convergence acceleration. Finally, numerical experiments are conducted on the discrete-time sampler to corroborate our theoretical findings.

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-liu25d,
  title = {Finite-Time Analysis of Discrete-Time Stochastic Interpolants},
  author = {Liu, Yuhao and Chen, Yu and Hu, Rui and Huang, Longbo},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages = {38119--38143},
  year = {2025},
  editor = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume = {267},
  series = {Proceedings of Machine Learning Research},
  month = {13--19 Jul},
  publisher = {PMLR},
  pdf = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/liu25d/liu25d.pdf},
  url = {https://proceedings.mlr.press/v267/liu25d.html},
  abstract = {The stochastic interpolant framework offers a powerful approach for constructing generative models based on ordinary differential equations (ODEs) or stochastic differential equations (SDEs) to transform arbitrary data distributions. However, prior analyses of this framework have primarily focused on the continuous-time setting, assuming perfect solution of the underlying equations. In this work, we present the first discrete-time analysis of the stochastic interpolant framework, where we introduce an innovative discrete-time sampler and derive a finite-time upper bound on its distribution estimation error. Our result provides a novel quantification of how different factors, including the distance between source and target distributions and estimation accuracy, affect the convergence rate, and also offers a new principled way to design efficient schedules for convergence acceleration. Finally, numerical experiments are conducted on the discrete-time sampler to corroborate our theoretical findings.}
}
Endnote
%0 Conference Paper
%T Finite-Time Analysis of Discrete-Time Stochastic Interpolants
%A Yuhao Liu
%A Yu Chen
%A Rui Hu
%A Longbo Huang
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-liu25d
%I PMLR
%P 38119--38143
%U https://proceedings.mlr.press/v267/liu25d.html
%V 267
%X The stochastic interpolant framework offers a powerful approach for constructing generative models based on ordinary differential equations (ODEs) or stochastic differential equations (SDEs) to transform arbitrary data distributions. However, prior analyses of this framework have primarily focused on the continuous-time setting, assuming perfect solution of the underlying equations. In this work, we present the first discrete-time analysis of the stochastic interpolant framework, where we introduce an innovative discrete-time sampler and derive a finite-time upper bound on its distribution estimation error. Our result provides a novel quantification of how different factors, including the distance between source and target distributions and estimation accuracy, affect the convergence rate, and also offers a new principled way to design efficient schedules for convergence acceleration. Finally, numerical experiments are conducted on the discrete-time sampler to corroborate our theoretical findings.
APA
Liu, Y., Chen, Y., Hu, R. & Huang, L. (2025). Finite-Time Analysis of Discrete-Time Stochastic Interpolants. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:38119-38143. Available from https://proceedings.mlr.press/v267/liu25d.html.