Diffuse Everything: Multimodal Diffusion Models on Arbitrary State Spaces

Kevin Rojas, Yuchen Zhu, Sichen Zhu, Felix X-F. Ye, Molei Tao
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:51924-51956, 2025.

Abstract

Diffusion models have demonstrated remarkable performance in generating unimodal data across various tasks, including image, video, and text generation. In contrast, the joint generation of multimodal data with diffusion models is still in the early stages of exploration. Existing approaches rely heavily on external preprocessing protocols, such as tokenizers and variational autoencoders, to harmonize varied data representations into a unified, unimodal format. This process places heavy demands on the accuracy of the encoders and decoders, which can be problematic for applications with limited data. To lift this restriction, we propose a novel framework for building multimodal diffusion models on arbitrary state spaces, enabling native generation of coupled data across different modalities. By introducing a decoupled noise schedule for each modality, we enable both unconditional and modality-conditioned generation within a single model. We empirically validate our approach on text-image generation and mixed-type tabular data synthesis, demonstrating that it achieves competitive performance. Code is available at https://github.com/KevinRojas1499/Diffuse-Everything.
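
To make the decoupled noise schedule concrete, the following is a minimal, hypothetical sketch (not the authors' released code; see the repository above for the actual implementation) of how independent per-modality noise levels might be applied during training to a continuous image modality (Gaussian corruption) and a discrete text modality (masking / absorbing-state corruption). All names here (MASK_ID, alpha_bar, corrupt_joint) are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): decoupled per-modality noise levels
# for one joint image-text corruption step.
import torch

MASK_ID = 0  # hypothetical absorbing "mask" token id for the text vocabulary

def alpha_bar(t):
    # A simple cosine-style Gaussian schedule for the continuous modality.
    return torch.cos(0.5 * torch.pi * t) ** 2

def corrupt_joint(x_img, y_txt, t_img, t_txt):
    """Apply independent forward noising to each modality.

    x_img: (B, C, H, W) continuous data, noised with a Gaussian schedule.
    y_txt: (B, L) token ids, noised by masking (absorbing-state corruption).
    t_img, t_txt: (B,) independent noise levels in [0, 1], one per modality.
    """
    a = alpha_bar(t_img).view(-1, 1, 1, 1)
    x_noisy = a.sqrt() * x_img + (1 - a).sqrt() * torch.randn_like(x_img)

    mask = torch.rand_like(y_txt, dtype=torch.float) < t_txt.view(-1, 1)
    y_noisy = torch.where(mask, torch.full_like(y_txt, MASK_ID), y_txt)
    return x_noisy, y_noisy

# Toy usage: sample independent noise levels for each modality.
B, C, H, W, L, V = 2, 3, 32, 32, 16, 1000
x = torch.randn(B, C, H, W)
y = torch.randint(1, V, (B, L))
t_img, t_txt = torch.rand(B), torch.rand(B)
x_t, y_t = corrupt_joint(x, y, t_img, t_txt)
```

Because the two noise levels vary independently, a single denoising network trained on such pairs can in principle be queried with t_txt = 0 for text-conditioned image generation, with t_img = 0 for image-conditioned text generation, or with both positive for joint unconditional generation, matching the generation modes described in the abstract.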

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-rojas25a,
  title     = {Diffuse Everything: Multimodal Diffusion Models on Arbitrary State Spaces},
  author    = {Rojas, Kevin and Zhu, Yuchen and Zhu, Sichen and Ye, Felix X-F. and Tao, Molei},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {51924--51956},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/rojas25a/rojas25a.pdf},
  url       = {https://proceedings.mlr.press/v267/rojas25a.html},
  abstract  = {Diffusion models have demonstrated remarkable performance in generating unimodal data across various tasks, including image, video, and text generation. On the contrary, the joint generation of multimodal data through diffusion models is still in the early stages of exploration. Existing approaches heavily rely on external preprocessing protocols, such as tokenizers and variational autoencoders, to harmonize varied data representations into a unified, unimodal format. This process heavily demands the high accuracy of encoders and decoders, which can be problematic for applications with limited data. To lift this restriction, we propose a novel framework for building multimodal diffusion models on arbitrary state spaces, enabling native generation of coupled data across different modalities. By introducing an innovative decoupled noise schedule for each modality, we enable both unconditional and modality-conditioned generation within a single model simultaneously. We empirically validate our approach for text-image generation and mixed-type tabular data synthesis, demonstrating that it achieves competitive performance. Code is available at https://github.com/KevinRojas1499/Diffuse-Everything.}
}
Endnote
%0 Conference Paper
%T Diffuse Everything: Multimodal Diffusion Models on Arbitrary State Spaces
%A Kevin Rojas
%A Yuchen Zhu
%A Sichen Zhu
%A Felix X-F. Ye
%A Molei Tao
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-rojas25a
%I PMLR
%P 51924--51956
%U https://proceedings.mlr.press/v267/rojas25a.html
%V 267
%X Diffusion models have demonstrated remarkable performance in generating unimodal data across various tasks, including image, video, and text generation. On the contrary, the joint generation of multimodal data through diffusion models is still in the early stages of exploration. Existing approaches heavily rely on external preprocessing protocols, such as tokenizers and variational autoencoders, to harmonize varied data representations into a unified, unimodal format. This process heavily demands the high accuracy of encoders and decoders, which can be problematic for applications with limited data. To lift this restriction, we propose a novel framework for building multimodal diffusion models on arbitrary state spaces, enabling native generation of coupled data across different modalities. By introducing an innovative decoupled noise schedule for each modality, we enable both unconditional and modality-conditioned generation within a single model simultaneously. We empirically validate our approach for text-image generation and mixed-type tabular data synthesis, demonstrating that it achieves competitive performance. Code is available at https://github.com/KevinRojas1499/Diffuse-Everything.
APA
Rojas, K., Zhu, Y., Zhu, S., Ye, F.X. & Tao, M. (2025). Diffuse Everything: Multimodal Diffusion Models on Arbitrary State Spaces. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:51924-51956. Available from https://proceedings.mlr.press/v267/rojas25a.html.
