Linear Convergence of Diffusion Models Under the Manifold Hypothesis

Peter Potaptchik, Iskander Azangulov, George Deligiannidis
Proceedings of Thirty Eighth Conference on Learning Theory, PMLR 291:4668-4685, 2025.

Abstract

Score-matching generative models have proven successful at sampling from complex high-dimensional data distributions. In many applications, this distribution is believed to concentrate on a much lower $d$-dimensional manifold embedded into $D$-dimensional space; this is known as the manifold hypothesis. The current best-known convergence guarantees are either linear in $D$ or polynomial (superlinear) in $d$. The latter exploits a novel integration scheme for the backward SDE. We take the best of both worlds and show that the number of steps diffusion models require in order to converge in Kullback-Leibler (KL) divergence is linear (up to logarithmic terms) in the intrinsic dimension $d$. Moreover, we show that this linear dependency is sharp.
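For readers outside the area, here is a minimal sketch of the setup the abstract refers to, assuming the standard Ornstein–Uhlenbeck forward process (the paper's exact time scaling and conventions may differ):

% Forward (noising) SDE started from the data distribution, and its
% time-reversal (backward SDE) driven by the score \nabla\log p_t,
% where p_t denotes the law of X_t.
\begin{align*}
  \mathrm{d}X_t &= -X_t\,\mathrm{d}t + \sqrt{2}\,\mathrm{d}B_t,
    & X_0 &\sim p_{\mathrm{data}}, \\
  \mathrm{d}Y_t &= \bigl(Y_t + 2\,\nabla\log p_{T-t}(Y_t)\bigr)\,\mathrm{d}t
    + \sqrt{2}\,\mathrm{d}\bar{B}_t,
    & Y_0 &\sim \mathcal{N}(0, I_D).
\end{align*}

Generation runs a discretization of the backward SDE with a learned score $s_\theta(t,\cdot) \approx \nabla\log p_t$; the "number of steps" bounded in the abstract is the number of discretization points of such a scheme on $[0, T]$.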

Cite this Paper


BibTeX
@InProceedings{pmlr-v291-potaptchik25a,
  title     = {Linear Convergence of Diffusion Models Under the Manifold Hypothesis},
  author    = {Potaptchik, Peter and Azangulov, Iskander and Deligiannidis, George},
  booktitle = {Proceedings of Thirty Eighth Conference on Learning Theory},
  pages     = {4668--4685},
  year      = {2025},
  editor    = {Haghtalab, Nika and Moitra, Ankur},
  volume    = {291},
  series    = {Proceedings of Machine Learning Research},
  month     = {30 Jun--04 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v291/main/assets/potaptchik25a/potaptchik25a.pdf},
  url       = {https://proceedings.mlr.press/v291/potaptchik25a.html},
  abstract  = {Score-matching generative models have proven successful at sampling from complex high-dimensional data distributions. In many applications, this distribution is believed to concentrate on a much lower $d$-dimensional manifold embedded into $D$-dimensional space; this is known as the manifold hypothesis. The current best-known convergence guarantees are either linear in $D$ or polynomial (superlinear) in $d$. The latter exploits a novel integration scheme for the backward SDE. We take the best of both worlds and show that the number of steps diffusion models require in order to converge in Kullback-Leibler (KL) divergence is linear (up to logarithmic terms) in the intrinsic dimension $d$. Moreover, we show that this linear dependency is sharp.}
}
Endnote
%0 Conference Paper
%T Linear Convergence of Diffusion Models Under the Manifold Hypothesis
%A Peter Potaptchik
%A Iskander Azangulov
%A George Deligiannidis
%B Proceedings of Thirty Eighth Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2025
%E Nika Haghtalab
%E Ankur Moitra
%F pmlr-v291-potaptchik25a
%I PMLR
%P 4668--4685
%U https://proceedings.mlr.press/v291/potaptchik25a.html
%V 291
%X Score-matching generative models have proven successful at sampling from complex high-dimensional data distributions. In many applications, this distribution is believed to concentrate on a much lower $d$-dimensional manifold embedded into $D$-dimensional space; this is known as the manifold hypothesis. The current best-known convergence guarantees are either linear in $D$ or polynomial (superlinear) in $d$. The latter exploits a novel integration scheme for the backward SDE. We take the best of both worlds and show that the number of steps diffusion models require in order to converge in Kullback-Leibler (KL) divergence is linear (up to logarithmic terms) in the intrinsic dimension $d$. Moreover, we show that this linear dependency is sharp.
APA
Potaptchik, P., Azangulov, I., & Deligiannidis, G. (2025). Linear Convergence of Diffusion Models Under the Manifold Hypothesis. Proceedings of Thirty Eighth Conference on Learning Theory, in Proceedings of Machine Learning Research 291:4668-4685. Available from https://proceedings.mlr.press/v291/potaptchik25a.html.