MultiDiffusion: Fusing Diffusion Paths for Controlled Image Generation

Omer Bar-Tal, Lior Yariv, Yaron Lipman, Tali Dekel
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:1737-1752, 2023.

Abstract

Recent advances in text-to-image generation with diffusion models present transformative capabilities in image quality. However, user controllability of the generated image and fast adaptation to new tasks still remain open challenges, currently addressed mostly by costly and lengthy re-training and fine-tuning, or by ad-hoc adaptations to specific image generation tasks. In this work, we present MultiDiffusion, a unified framework that enables versatile and controllable image generation using a pre-trained text-to-image diffusion model, without any further training or fine-tuning. At the center of our approach is a new generation process, based on an optimization task that binds together multiple diffusion generation processes with a shared set of parameters or constraints. We show that MultiDiffusion can be readily applied to generate high-quality and diverse images that adhere to user-provided controls, such as a desired aspect ratio (e.g., panorama) and spatial guiding signals, ranging from tight segmentation masks to bounding boxes.
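To make the fusion idea concrete, the sketch below illustrates one fused denoising step for panorama generation: the pre-trained model denoises overlapping fixed-size windows of a shared wide latent, and the per-window predictions are reconciled by a per-pixel average, which solves the least-squares objective binding the windows together. This is an illustrative sketch under stated assumptions, not the authors' reference implementation; the wrapper `denoise_step`, the window and stride sizes, and the uniform weighting are placeholders.

```python
import numpy as np

def multidiffusion_step(latent, t, denoise_step, window=64, stride=16):
    """One fused denoising step over a wide latent of shape (C, H, W).

    `denoise_step(crop, t)` is a hypothetical wrapper around a pre-trained
    text-to-image diffusion model that denoises a (C, H, window) crop at
    timestep t. Overlapping window predictions are averaged per pixel,
    the closed-form solution of the least-squares fusion objective.
    """
    c, h, w = latent.shape
    fused = np.zeros_like(latent)
    counts = np.zeros((1, h, w), dtype=latent.dtype)

    # Horizontal window starts; ensure the last window reaches the right edge.
    starts = list(range(0, w - window, stride)) + [w - window]
    for x in starts:
        crop = latent[:, :, x:x + window]   # map the shared latent to window i
        pred = denoise_step(crop, t)        # run the pre-trained denoiser on window i
        fused[:, :, x:x + window] += pred   # accumulate window predictions
        counts[:, :, x:x + window] += 1.0
    return fused / counts                   # per-pixel average over overlapping windows
```

A full generation loop would start from Gaussian noise over the wide latent and apply this fused step at every diffusion timestep; region-based control follows the same pattern with per-region masks as weights instead of uniform averaging.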

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-bar-tal23a,
  title     = {{M}ulti{D}iffusion: Fusing Diffusion Paths for Controlled Image Generation},
  author    = {Bar-Tal, Omer and Yariv, Lior and Lipman, Yaron and Dekel, Tali},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {1737--1752},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/bar-tal23a/bar-tal23a.pdf},
  url       = {https://proceedings.mlr.press/v202/bar-tal23a.html},
  abstract  = {Recent advances in text-to-image generation with diffusion models present transformative capabilities in image quality. However, user controllability of the generated image, and fast adaptation to new tasks still remains an open challenge, currently mostly addressed by costly and long re-training and fine-tuning or ad-hoc adaptations to specific image generation tasks. In this work, we present MultiDiffusion, a unified framework that enables versatile and controllable image generation, using a pre-trained text-to-image diffusion model, without any further training or finetuning. At the center of our approach is a new generation process, based on an optimization task that binds together multiple diffusion generation processes with a shared set of parameters or constraints. We show that MultiDiffusion can be readily applied to generate high quality and diverse images that adhere to user-provided controls, such as desired aspect ratio (e.g., panorama), and spatial guiding signals, ranging from tight segmentation masks to bounding boxes.}
}
Endnote
%0 Conference Paper
%T MultiDiffusion: Fusing Diffusion Paths for Controlled Image Generation
%A Omer Bar-Tal
%A Lior Yariv
%A Yaron Lipman
%A Tali Dekel
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-bar-tal23a
%I PMLR
%P 1737--1752
%U https://proceedings.mlr.press/v202/bar-tal23a.html
%V 202
%X Recent advances in text-to-image generation with diffusion models present transformative capabilities in image quality. However, user controllability of the generated image, and fast adaptation to new tasks still remains an open challenge, currently mostly addressed by costly and long re-training and fine-tuning or ad-hoc adaptations to specific image generation tasks. In this work, we present MultiDiffusion, a unified framework that enables versatile and controllable image generation, using a pre-trained text-to-image diffusion model, without any further training or finetuning. At the center of our approach is a new generation process, based on an optimization task that binds together multiple diffusion generation processes with a shared set of parameters or constraints. We show that MultiDiffusion can be readily applied to generate high quality and diverse images that adhere to user-provided controls, such as desired aspect ratio (e.g., panorama), and spatial guiding signals, ranging from tight segmentation masks to bounding boxes.
APA
Bar-Tal, O., Yariv, L., Lipman, Y. & Dekel, T. (2023). MultiDiffusion: Fusing Diffusion Paths for Controlled Image Generation. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:1737-1752. Available from https://proceedings.mlr.press/v202/bar-tal23a.html.