Upcycling Text-to-Image Diffusion Models for Multi-Task Capabilities

Ruchika Chavhan, Abhinav Mehrotra, Malcolm Chadwick, Alberto Gil Couto Pimentel Ramos, Luca Morreale, Mehdi Noroozi, Sourav Bhattacharya
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:7578-7594, 2025.

Abstract

Text-to-image synthesis has witnessed remarkable advancements in recent years. Many attempts have been made to adapt text-to-image models to support multiple tasks. However, existing approaches typically require resource-intensive re-training or additional parameters to accommodate new tasks, which makes the model inefficient for on-device deployment. We propose Multi-Task Upcycling (MTU), a simple yet effective recipe that extends the capabilities of a pre-trained text-to-image diffusion model to support a variety of image-to-image generation tasks. MTU replaces Feed-Forward Network (FFN) layers in the diffusion model with smaller FFNs, referred to as experts, and combines them with a dynamic routing mechanism. To the best of our knowledge, MTU is the first multi-task diffusion modeling approach that seamlessly blends multi-tasking with on-device compatibility by mitigating the issue of parameter inflation. We show that the performance of MTU is on par with single-task fine-tuned diffusion models across several tasks, including image editing, super-resolution, and inpainting, while maintaining similar latency and computational load (GFLOPs) to the single-task fine-tuned models.
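The abstract's core mechanism — replacing each large FFN with several smaller expert FFNs mixed by a dynamic router — can be sketched in a few lines. This is a minimal illustration of that general expert-routing pattern, not the paper's implementation: all dimensions, initializations, and the softmax router are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def ffn(x, w1, w2):
    # A standard two-layer feed-forward block with a tanh-based GELU approximation.
    h = x @ w1
    h = 0.5 * h * (1.0 + np.tanh(0.7978845608 * (h + 0.044715 * h**3)))
    return h @ w2

d_model, d_hidden, n_experts = 64, 256, 4
d_small = d_hidden // n_experts  # smaller experts keep total parameters comparable

# Experts: each a small FFN (d_model -> d_small -> d_model).
experts = [(rng.standard_normal((d_model, d_small)) * 0.02,
            rng.standard_normal((d_small, d_model)) * 0.02)
           for _ in range(n_experts)]

# Router: maps each token to a softmax distribution over the experts.
w_router = rng.standard_normal((d_model, n_experts)) * 0.02

def upcycled_ffn(x):
    logits = x @ w_router                            # (tokens, n_experts)
    gates = np.exp(logits - logits.max(axis=-1, keepdims=True))
    gates /= gates.sum(axis=-1, keepdims=True)       # per-token routing weights
    out = np.zeros_like(x)
    for k, (w1, w2) in enumerate(experts):
        out += gates[:, k:k+1] * ffn(x, w1, w2)      # weighted mixture of experts
    return out

tokens = rng.standard_normal((8, d_model))
y = upcycled_ffn(tokens)
print(y.shape)  # (8, 64): same input/output interface as the FFN it replaces
```

Because the experts are smaller than the original hidden layer and the routing weights add only a `d_model × n_experts` matrix, the drop-in layer keeps the parameter count and GFLOPs close to the original FFN — which is the on-device motivation the abstract describes.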

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-chavhan25a,
  title     = {Upcycling Text-to-Image Diffusion Models for Multi-Task Capabilities},
  author    = {Chavhan, Ruchika and Mehrotra, Abhinav and Chadwick, Malcolm and Couto Pimentel Ramos, Alberto Gil and Morreale, Luca and Noroozi, Mehdi and Bhattacharya, Sourav},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {7578--7594},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/chavhan25a/chavhan25a.pdf},
  url       = {https://proceedings.mlr.press/v267/chavhan25a.html},
  abstract  = {Text-to-image synthesis has witnessed remarkable advancements in recent years. Many attempts have been made to adopt text-to-image models to support multiple tasks. However, existing approaches typically require resource-intensive re-training or additional parameters to accommodate for the new tasks, which makes the model inefficient for on-device deployment. We propose Multi-Task Upcycling (MTU), a simple yet effective recipe that extends the capabilities of a pre-trained text-to-image diffusion model to support a variety of image-to-image generation tasks. MTU replaces Feed-Forward Network (FFN) layers in the diffusion model with smaller FFNs, referred to as experts, and combines them with a dynamic routing mechanism. To the best of our knowledge, MTU is the first multi-task diffusion modeling approach that seamlessly blends multi-tasking with on-device compatibility, by mitigating the issue of parameter inflation. We show that the performance of MTU is on par with the single-task fine-tuned diffusion models across several tasks including image editing, super-resolution, and inpainting, while maintaining similar latency and computational load (GFLOPs) as the single-task fine-tuned models.}
}
Endnote
%0 Conference Paper
%T Upcycling Text-to-Image Diffusion Models for Multi-Task Capabilities
%A Ruchika Chavhan
%A Abhinav Mehrotra
%A Malcolm Chadwick
%A Alberto Gil Couto Pimentel Ramos
%A Luca Morreale
%A Mehdi Noroozi
%A Sourav Bhattacharya
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-chavhan25a
%I PMLR
%P 7578--7594
%U https://proceedings.mlr.press/v267/chavhan25a.html
%V 267
%X Text-to-image synthesis has witnessed remarkable advancements in recent years. Many attempts have been made to adopt text-to-image models to support multiple tasks. However, existing approaches typically require resource-intensive re-training or additional parameters to accommodate for the new tasks, which makes the model inefficient for on-device deployment. We propose Multi-Task Upcycling (MTU), a simple yet effective recipe that extends the capabilities of a pre-trained text-to-image diffusion model to support a variety of image-to-image generation tasks. MTU replaces Feed-Forward Network (FFN) layers in the diffusion model with smaller FFNs, referred to as experts, and combines them with a dynamic routing mechanism. To the best of our knowledge, MTU is the first multi-task diffusion modeling approach that seamlessly blends multi-tasking with on-device compatibility, by mitigating the issue of parameter inflation. We show that the performance of MTU is on par with the single-task fine-tuned diffusion models across several tasks including image editing, super-resolution, and inpainting, while maintaining similar latency and computational load (GFLOPs) as the single-task fine-tuned models.
APA
Chavhan, R., Mehrotra, A., Chadwick, M., Couto Pimentel Ramos, A.G., Morreale, L., Noroozi, M. & Bhattacharya, S. (2025). Upcycling Text-to-Image Diffusion Models for Multi-Task Capabilities. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:7578-7594. Available from https://proceedings.mlr.press/v267/chavhan25a.html.
