ReDi: Efficient Learning-Free Diffusion Inference via Trajectory Retrieval

Kexun Zhang, Xianjun Yang, William Yang Wang, Lei Li
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:41770-41785, 2023.

Abstract

Diffusion models show promising generation capability for a variety of data. Despite their high generation quality, inference with diffusion models is still time-consuming due to the many sampling iterations required. To accelerate inference, we propose ReDi, a simple, learning-free Retrieval-based Diffusion sampling framework. From a precomputed knowledge base, ReDi retrieves a trajectory similar to the partially generated trajectory at an early stage of generation, skips a large portion of the intermediate steps, and continues sampling from a later step in the retrieved trajectory. We theoretically prove that the generation performance of ReDi is guaranteed. Our experiments demonstrate that ReDi improves model inference efficiency with a 2$\times$ speedup. Furthermore, ReDi generalizes well to zero-shot cross-domain image generation such as image stylization. The code and demo for ReDi are available at https://github.com/zkx06111/ReDiffusion.
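
For intuition, the following is a minimal sketch of the retrieve-then-skip loop the abstract describes, not the paper's actual implementation (see the linked repository for that). The `denoise_step` function, the structure of `knowledge_base`, and the use of L2 nearest-neighbor matching on the early-stage latent are all illustrative assumptions.

```python
import numpy as np

def redi_sample(x_T, denoise_step, knowledge_base, k_early, k_skip_to, T):
    """Sketch of retrieval-based diffusion sampling (hypothetical API).

    x_T:            initial noise latent at timestep T.
    denoise_step:   one reverse-diffusion step, x_{t-1} = denoise_step(x_t, t).
    knowledge_base: precomputed trajectories, each a dict mapping
                    timestep -> latent array, built by running the full
                    sampler offline.
    k_early:        early timestep at which to query the knowledge base.
    k_skip_to:      later timestep (k_skip_to < k_early) to resume from.
    """
    # 1. Run the ordinary sampler for only the first few steps,
    #    from timestep T down to k_early.
    x = x_T
    for t in range(T, k_early, -1):
        x = denoise_step(x, t)

    # 2. Retrieve the stored trajectory whose latent at k_early is
    #    nearest to the partially generated one (L2 distance here).
    query = x.ravel()
    best = min(knowledge_base,
               key=lambda traj: np.linalg.norm(traj[k_early].ravel() - query))

    # 3. Skip the intermediate steps: jump directly to the retrieved
    #    trajectory's latent at the much later timestep k_skip_to.
    x = best[k_skip_to]

    # 4. Finish sampling normally from k_skip_to down to 0.
    for t in range(k_skip_to, 0, -1):
        x = denoise_step(x, t)
    return x
```

Under these assumptions, the sampler pays for only `(T - k_early) + k_skip_to` denoising steps instead of `T`, which is where the reported speedup would come from.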

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-zhang23as,
  title     = {{R}e{D}i: Efficient Learning-Free Diffusion Inference via Trajectory Retrieval},
  author    = {Zhang, Kexun and Yang, Xianjun and Wang, William Yang and Li, Lei},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {41770--41785},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/zhang23as/zhang23as.pdf},
  url       = {https://proceedings.mlr.press/v202/zhang23as.html}
}
Endnote
%0 Conference Paper
%T ReDi: Efficient Learning-Free Diffusion Inference via Trajectory Retrieval
%A Kexun Zhang
%A Xianjun Yang
%A William Yang Wang
%A Lei Li
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-zhang23as
%I PMLR
%P 41770--41785
%U https://proceedings.mlr.press/v202/zhang23as.html
%V 202
APA
Zhang, K., Yang, X., Wang, W.Y. & Li, L. (2023). ReDi: Efficient Learning-Free Diffusion Inference via Trajectory Retrieval. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:41770-41785. Available from https://proceedings.mlr.press/v202/zhang23as.html.