MuLan: Adapting Multilingual Diffusion Models for Hundreds of Languages with Negligible Cost

Sen Xing, Muyan Zhong, Zeqiang Lai, Liangchen Li, Jiawen Liu, Yaohui Wang, Jifeng Dai, Wenhai Wang
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:68953-68969, 2025.

Abstract

In this work, we explore a cost-effective framework for multilingual image generation. We find that, unlike models tuned on high-quality images with multilingual annotations, leveraging text encoders pre-trained on widely available, noisy Internet image-text pairs significantly enhances data efficiency in text-to-image (T2I) generation across multiple languages. Based on this insight, we introduce MuLan (Multi-Language adapter), a lightweight language adapter with fewer than 20M parameters, trained alongside a frozen text encoder and image diffusion model. Compared to previous multilingual T2I models, this framework offers: (1) Cost efficiency. Using readily accessible English data and off-the-shelf multilingual text encoders minimizes the training cost; (2) High performance. Achieving comparable generation capabilities in over 110 languages with CLIP similarity scores nearly matching those in English (39.57 for English vs. 39.61 for other languages); and (3) Broad applicability. Seamlessly integrating with compatible community tools like LoRA, LCM, ControlNet, and IP-Adapter, expanding its potential use cases.
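To make the adapter idea in the abstract concrete, below is a minimal, illustrative PyTorch sketch of a lightweight language adapter that maps features from a frozen multilingual text encoder into the text-conditioning space of a frozen diffusion model. This is not the authors' released code; the module name, dimensions, and bottleneck-MLP design are assumptions made purely for illustration.

# Illustrative sketch only (not the paper's implementation): a small trainable
# adapter between a frozen multilingual text encoder and a frozen T2I diffusion
# model. Dimensions are hypothetical placeholders.
import torch
import torch.nn as nn

class LanguageAdapter(nn.Module):
    """Lightweight bottleneck MLP; well under 20M parameters at these sizes."""
    def __init__(self, in_dim: int = 1024, out_dim: int = 768, hidden: int = 2048):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.GELU(),
            nn.Linear(hidden, out_dim),
        )
        self.norm = nn.LayerNorm(out_dim)

    def forward(self, multilingual_tokens: torch.Tensor) -> torch.Tensor:
        # multilingual_tokens: (batch, seq_len, in_dim) from a frozen
        # multilingual text encoder; the output matches the width expected by
        # the frozen diffusion model's cross-attention conditioning.
        return self.norm(self.net(multilingual_tokens))

# Only the adapter would be trained; the text encoder and diffusion model stay frozen.
adapter = LanguageAdapter()
print(f"{sum(p.numel() for p in adapter.parameters()) / 1e6:.1f}M trainable params")

In such a setup, only the adapter's parameters receive gradients during training on English image-text data, which is one plausible reading of how the "negligible cost" and broad language coverage claims fit together: the multilingual generalization comes from the frozen, pre-trained text encoder rather than from multilingual training data.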

Cite this Paper

BibTeX
@InProceedings{pmlr-v267-xing25d,
  title     = {{M}u{L}an: Adapting Multilingual Diffusion Models for Hundreds of Languages with Negligible Cost},
  author    = {Xing, Sen and Zhong, Muyan and Lai, Zeqiang and Li, Liangchen and Liu, Jiawen and Wang, Yaohui and Dai, Jifeng and Wang, Wenhai},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {68953--68969},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/xing25d/xing25d.pdf},
  url       = {https://proceedings.mlr.press/v267/xing25d.html},
  abstract  = {In this work, we explore a cost-effective framework for multilingual image generation. We find that, unlike models tuned on high-quality images with multilingual annotations, leveraging text encoders pre-trained on widely available, noisy Internet image-text pairs significantly enhances data efficiency in text-to-image (T2I) generation across multiple languages. Based on this insight, we introduce MuLan, Multi-Language adapter, a lightweight language adapter with fewer than 20M parameters, trained alongside a frozen text encoder and image diffusion model. Compared to previous multilingual T2I models, this framework offers: (1) Cost efficiency. Using readily accessible English data and off-the-shelf multilingual text encoders minimizes the training cost; (2) High performance. Achieving comparable generation capabilities in over 110 languages with CLIP similarity scores nearly matching those in English (39.57 for English vs. 39.61 for other languages); and (3) Broad applicability. Seamlessly integrating with compatible community tools like LoRA, LCM, ControlNet, and IP-Adapter, expanding its potential use cases.}
}
Endnote
%0 Conference Paper
%T MuLan: Adapting Multilingual Diffusion Models for Hundreds of Languages with Negligible Cost
%A Sen Xing
%A Muyan Zhong
%A Zeqiang Lai
%A Liangchen Li
%A Jiawen Liu
%A Yaohui Wang
%A Jifeng Dai
%A Wenhai Wang
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-xing25d
%I PMLR
%P 68953--68969
%U https://proceedings.mlr.press/v267/xing25d.html
%V 267
%X In this work, we explore a cost-effective framework for multilingual image generation. We find that, unlike models tuned on high-quality images with multilingual annotations, leveraging text encoders pre-trained on widely available, noisy Internet image-text pairs significantly enhances data efficiency in text-to-image (T2I) generation across multiple languages. Based on this insight, we introduce MuLan, Multi-Language adapter, a lightweight language adapter with fewer than 20M parameters, trained alongside a frozen text encoder and image diffusion model. Compared to previous multilingual T2I models, this framework offers: (1) Cost efficiency. Using readily accessible English data and off-the-shelf multilingual text encoders minimizes the training cost; (2) High performance. Achieving comparable generation capabilities in over 110 languages with CLIP similarity scores nearly matching those in English (39.57 for English vs. 39.61 for other languages); and (3) Broad applicability. Seamlessly integrating with compatible community tools like LoRA, LCM, ControlNet, and IP-Adapter, expanding its potential use cases.
APA
Xing, S., Zhong, M., Lai, Z., Li, L., Liu, J., Wang, Y., Dai, J. & Wang, W. (2025). MuLan: Adapting Multilingual Diffusion Models for Hundreds of Languages with Negligible Cost. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:68953-68969. Available from https://proceedings.mlr.press/v267/xing25d.html.