Sampling is as easy as keeping the consistency: convergence guarantee for Consistency Models

Junlong Lyu, Zhitang Chen, Shoubo Feng
Proceedings of the 41st International Conference on Machine Learning, PMLR 235:33664-33685, 2024.

Abstract

We provide the first convergence guarantee for Consistency Models (CMs), a newly emerging type of one-step generative models capable of generating samples comparable to those from state-of-the-art Diffusion Models. Our main result is that, under basic assumptions on score-matching errors, consistency errors, and smoothness of the data distribution, CMs can efficiently generate samples in one step with small $W_2$ error to any real data distribution. Our results (1) hold under $L^2$-accuracy assumptions on both the score and consistency functions (rather than $L^\infty$-accuracy assumptions); (2) do not require strong assumptions on the data distribution such as log-Sobolev conditions; (3) scale polynomially in all parameters; and (4) match the state-of-the-art convergence guarantees for score-based generative models. We also show that the Multi-step Consistency Sampling procedure can further reduce the error compared to one-step sampling, which supports the original statement from Yang Song's work. Our results can be generalized to arbitrary bounded data distributions that may be supported on low-dimensional sub-manifolds. Our results further imply TV error guarantees when Langevin-based modifications are made to the output distributions.
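
The abstract contrasts one-step generation with the Multi-step Consistency Sampling procedure. For concreteness, below is a minimal sketch of that sampling loop in the spirit of Algorithm 1 of Song et al.'s consistency models paper; it is not the authors' implementation. Here consistency_fn is a hypothetical stand-in for a trained consistency model f_theta(x, t), and the time schedule and noise floor eps are illustrative choices.

    # Minimal sketch of one-step and multi-step consistency sampling (assumptions noted above).
    import torch

    def consistency_sample(consistency_fn, shape, time_points, eps=0.002):
        """time_points = [T = tau_0 > tau_1 > ... > tau_{N-1} > eps].
        A single time point reduces this to one-step sampling."""
        T = time_points[0]
        x_T = T * torch.randn(shape)        # initial noise x_T ~ N(0, T^2 I)
        x = consistency_fn(x_T, T)          # one-step estimate of a clean sample
        for tau in time_points[1:]:         # optional refinement steps
            z = torch.randn(shape)
            x_tau = x + (tau**2 - eps**2) ** 0.5 * z   # re-noise to level tau
            x = consistency_fn(x_tau, tau)             # map back toward the data
        return x

With a longer list of time points the loop performs the multi-step refinement whose error reduction over one-step sampling is analyzed in the paper.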

Cite this Paper


BibTeX
@InProceedings{pmlr-v235-lyu24b,
  title     = {Sampling is as easy as keeping the consistency: convergence guarantee for Consistency Models},
  author    = {Lyu, Junlong and Chen, Zhitang and Feng, Shoubo},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  pages     = {33664--33685},
  year      = {2024},
  editor    = {Salakhutdinov, Ruslan and Kolter, Zico and Heller, Katherine and Weller, Adrian and Oliver, Nuria and Scarlett, Jonathan and Berkenkamp, Felix},
  volume    = {235},
  series    = {Proceedings of Machine Learning Research},
  month     = {21--27 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v235/main/assets/lyu24b/lyu24b.pdf},
  url       = {https://proceedings.mlr.press/v235/lyu24b.html},
  abstract  = {We provide the first convergence guarantee for Consistency Models (CMs), a newly emerging type of one-step generative models capable of generating samples comparable to those from state-of-the-art Diffusion Models. Our main result is that, under basic assumptions on score-matching errors, consistency errors, and smoothness of the data distribution, CMs can efficiently generate samples in one step with small $W_2$ error to any real data distribution. Our results (1) hold under $L^2$-accuracy assumptions on both the score and consistency functions (rather than $L^\infty$-accuracy assumptions); (2) do not require strong assumptions on the data distribution such as log-Sobolev conditions; (3) scale polynomially in all parameters; and (4) match the state-of-the-art convergence guarantees for score-based generative models. We also show that the Multi-step Consistency Sampling procedure can further reduce the error compared to one-step sampling, which supports the original statement from Yang Song's work. Our results can be generalized to arbitrary bounded data distributions that may be supported on low-dimensional sub-manifolds. Our results further imply TV error guarantees when Langevin-based modifications are made to the output distributions.}
}
Endnote
%0 Conference Paper
%T Sampling is as easy as keeping the consistency: convergence guarantee for Consistency Models
%A Junlong Lyu
%A Zhitang Chen
%A Shoubo Feng
%B Proceedings of the 41st International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ruslan Salakhutdinov
%E Zico Kolter
%E Katherine Heller
%E Adrian Weller
%E Nuria Oliver
%E Jonathan Scarlett
%E Felix Berkenkamp
%F pmlr-v235-lyu24b
%I PMLR
%P 33664--33685
%U https://proceedings.mlr.press/v235/lyu24b.html
%V 235
%X We provide the first convergence guarantee for Consistency Models (CMs), a newly emerging type of one-step generative models capable of generating samples comparable to those from state-of-the-art Diffusion Models. Our main result is that, under basic assumptions on score-matching errors, consistency errors, and smoothness of the data distribution, CMs can efficiently generate samples in one step with small $W_2$ error to any real data distribution. Our results (1) hold under $L^2$-accuracy assumptions on both the score and consistency functions (rather than $L^\infty$-accuracy assumptions); (2) do not require strong assumptions on the data distribution such as log-Sobolev conditions; (3) scale polynomially in all parameters; and (4) match the state-of-the-art convergence guarantees for score-based generative models. We also show that the Multi-step Consistency Sampling procedure can further reduce the error compared to one-step sampling, which supports the original statement from Yang Song's work. Our results can be generalized to arbitrary bounded data distributions that may be supported on low-dimensional sub-manifolds. Our results further imply TV error guarantees when Langevin-based modifications are made to the output distributions.
APA
Lyu, J., Chen, Z. & Feng, S. (2024). Sampling is as easy as keeping the consistency: convergence guarantee for Consistency Models. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:33664-33685. Available from https://proceedings.mlr.press/v235/lyu24b.html.
