Adaptive Time-Stepping Schedules for Diffusion Models

Yuzhu Chen, Fengxiang He, Shi Fu, Xinmei Tian, Dacheng Tao
Proceedings of the Fortieth Conference on Uncertainty in Artificial Intelligence, PMLR 244:685-697, 2024.

Abstract

This paper studies how to tune the time-stepping schedule in diffusion models, which is mostly fixed in current practice and therefore lacks theoretical foundations and any assurance of optimal performance at the chosen discretization points. We advocate adaptive time-stepping schedules and design two algorithms built around an optimized sampling error bound $EB$: (1) for continuous diffusion, we treat $EB$ as a loss function of the discretization points and run gradient descent to adjust them; and (2) for discrete diffusion, we propose a greedy algorithm that, in each iteration, moves a single discretization point to its best position. Extensive experiments show (1) improved generation ability in well-trained models, and (2) premature yet usable generation ability in under-trained models. The code is submitted and will be released publicly.
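To make the two procedures concrete, below is a minimal PyTorch sketch of their control flow. The error-bound function eb_surrogate is a hypothetical placeholder: the paper's actual bound $EB$ depends on the trained score network and the noise schedule, neither of which appears in this abstract. The function names, hyperparameters, and the surrogate itself are illustrative assumptions, not the authors' implementation.

import torch

def eb_surrogate(times):
    # Hypothetical stand-in for the paper's sampling error bound EB.
    # The true bound depends on the trained model and noise schedule; here
    # we simply penalize large squared gaps, weighted toward small t.
    gaps = times[1:] - times[:-1]
    weights = 1.0 / (times[:-1] + 1e-3)
    return torch.sum(weights * gaps ** 2)

def optimize_schedule_continuous(n_steps=20, t_min=1e-3, t_max=1.0, iters=500, lr=1e-2):
    # Continuous case: gradient descent on the interior discretization points,
    # treating the error bound (here, the surrogate) as the loss. Endpoints stay fixed.
    times = torch.linspace(t_min, t_max, n_steps + 1)
    interior = times[1:-1].clone().requires_grad_(True)
    opt = torch.optim.Adam([interior], lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        full = torch.cat([times[:1], interior, times[-1:]])
        eb_surrogate(full).backward()
        opt.step()
        with torch.no_grad():
            # Keep the points ordered and inside (t_min, t_max).
            interior.clamp_(t_min, t_max)
            interior.data.copy_(interior.data.sort().values)
    return torch.cat([times[:1], interior.detach(), times[-1:]])

def optimize_schedule_discrete(grid, n_steps=20, sweeps=5):
    # Discrete case: greedy coordinate updates on a fixed time grid.
    # Each pass moves one interior point to its best admissible slot.
    idx = torch.linspace(0, len(grid) - 1, n_steps + 1).long().tolist()
    for _ in range(sweeps):
        for k in range(1, n_steps):
            best_j, best_val = idx[k], float("inf")
            for j in range(idx[k - 1] + 1, idx[k + 1]):  # preserve ordering
                trial = idx.copy()
                trial[k] = j
                val = eb_surrogate(grid[torch.tensor(trial)]).item()
                if val < best_val:
                    best_j, best_val = j, val
            idx[k] = best_j
    return grid[torch.tensor(idx)]

In the paper's setting, each gradient step or greedy trial would evaluate the bound with the trained model and would therefore be more expensive; the sketch only illustrates the structure of the two schemes described in the abstract.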

Cite this Paper


BibTeX
@InProceedings{pmlr-v244-chen24c, title = {Adaptive Time-Stepping Schedules for Diffusion Models}, author = {Chen, Yuzhu and He, Fengxiang and Fu, Shi and Tian, Xinmei and Tao, Dacheng}, booktitle = {Proceedings of the Fortieth Conference on Uncertainty in Artificial Intelligence}, pages = {685--697}, year = {2024}, editor = {Kiyavash, Negar and Mooij, Joris M.}, volume = {244}, series = {Proceedings of Machine Learning Research}, month = {15--19 Jul}, publisher = {PMLR}, pdf = {https://raw.githubusercontent.com/mlresearch/v244/main/assets/chen24c/chen24c.pdf}, url = {https://proceedings.mlr.press/v244/chen24c.html}, abstract = {This paper studies how to tune the stepping schedule in diffusion models, which is mostly fixed in current practice, lacking theoretical foundations and assurance of optimal performance at the chosen discretization points. In this paper, we advocate the use of adaptive time-stepping schedules and design two algorithms with an optimized sampling error bound $EB$: (1) for continuous diffusion, we treat $EB$ as the loss function to discretization points and run gradient descent to adjust them; and (2) for discrete diffusion, we propose a greedy algorithm that adjusts only one discretization point to its best position in each iteration. We conducted extensive experiments that show (1) improved generation ability in well-trained models, and (2) premature though usable generation ability in under-trained models. The code is submitted and will be released publicly.} }
Endnote
%0 Conference Paper %T Adaptive Time-Stepping Schedules for Diffusion Models %A Yuzhu Chen %A Fengxiang He %A Shi Fu %A Xinmei Tian %A Dacheng Tao %B Proceedings of the Fortieth Conference on Uncertainty in Artificial Intelligence %C Proceedings of Machine Learning Research %D 2024 %E Negar Kiyavash %E Joris M. Mooij %F pmlr-v244-chen24c %I PMLR %P 685--697 %U https://proceedings.mlr.press/v244/chen24c.html %V 244 %X This paper studies how to tune the stepping schedule in diffusion models, which is mostly fixed in current practice, lacking theoretical foundations and assurance of optimal performance at the chosen discretization points. In this paper, we advocate the use of adaptive time-stepping schedules and design two algorithms with an optimized sampling error bound $EB$: (1) for continuous diffusion, we treat $EB$ as the loss function to discretization points and run gradient descent to adjust them; and (2) for discrete diffusion, we propose a greedy algorithm that adjusts only one discretization point to its best position in each iteration. We conducted extensive experiments that show (1) improved generation ability in well-trained models, and (2) premature though usable generation ability in under-trained models. The code is submitted and will be released publicly.
APA
Chen, Y., He, F., Fu, S., Tian, X. & Tao, D.. (2024). Adaptive Time-Stepping Schedules for Diffusion Models. Proceedings of the Fortieth Conference on Uncertainty in Artificial Intelligence, in Proceedings of Machine Learning Research 244:685-697 Available from https://proceedings.mlr.press/v244/chen24c.html.
