xMADD: A Unified Diffusion Framework for Conditioned Synthesis of Medical Images and Waveforms

Sam Freesun Friedman, Sana Tonekaboni, Arash A. Nargesi, Caroline Uhler, Mahnaz Maddah
Proceedings of the Fifth Machine Learning for Health Symposium, PMLR 297:588-604, 2026.

Abstract

Diffusion models have shown remarkable success in generating high-quality perceptual data, but their use for controlled generation in biomedicine remains limited. We introduce xMADD (cross-Modal cross-Attention Denoising Diffusion), a conditional diffusion framework for producing diverse, high-resolution medical data, including cardiac MRI, brain MRI, and ECG waveforms, guided by clinical phenotypes, demographics, and multimodal signals. By incorporating cross-attention over conditional embeddings, xMADD enables control over generation. Compared to existing generative approaches, xMADD achieves superior image fidelity and stability, while accurately reflecting conditioning phenotypes across modalities. Our results highlight the potential of controlled diffusion-based generation to expand biomedical datasets and facilitate data-sharing without compromising sensitive patient data.
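The conditioning mechanism described in the abstract (cross-attention over embeddings of phenotypes, demographics, and multimodal signals) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the class name, feature and condition dimensions, and the residual placement are all assumptions for exposition.

```python
# Minimal sketch of cross-attention conditioning inside a denoising network:
# noisy-sample feature tokens act as queries, and embeddings of clinical
# conditions act as keys/values. All names and sizes are illustrative.
import torch
import torch.nn as nn


class CrossAttentionConditioner(nn.Module):
    """Injects conditioning embeddings into denoiser features via cross-attention."""

    def __init__(self, feat_dim: int, cond_dim: int, n_heads: int = 4):
        super().__init__()
        self.norm = nn.LayerNorm(feat_dim)
        self.cond_proj = nn.Linear(cond_dim, feat_dim)  # map condition tokens to feature width
        self.attn = nn.MultiheadAttention(feat_dim, n_heads, batch_first=True)

    def forward(self, feats: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        # feats: (B, N, feat_dim) flattened spatial/temporal tokens of the noisy sample
        # cond:  (B, M, cond_dim) one token per phenotype/demographic/multimodal signal
        kv = self.cond_proj(cond)
        attended, _ = self.attn(query=self.norm(feats), key=kv, value=kv)
        return feats + attended  # residual update keeps the denoiser close to unconditional behavior


# Toy usage: 256 latent tokens of an image slice, 3 condition tokens.
feats = torch.randn(2, 256, 128)
cond = torch.randn(2, 3, 32)
layer = CrossAttentionConditioner(feat_dim=128, cond_dim=32)
print(layer(feats, cond).shape)  # torch.Size([2, 256, 128])
```

A block like this would typically be interleaved with the denoiser's self-attention or convolutional layers at one or more resolutions; the exact placement in xMADD is described in the paper itself.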

Cite this Paper


BibTeX
@InProceedings{pmlr-v297-friedman26a,
  title     = {{xMADD}: A Unified Diffusion Framework for Conditioned Synthesis of Medical Images and Waveforms},
  author    = {Friedman, Sam Freesun and Tonekaboni, Sana and Nargesi, Arash A. and Uhler, Caroline and Maddah, Mahnaz},
  booktitle = {Proceedings of the Fifth Machine Learning for Health Symposium},
  pages     = {588--604},
  year      = {2026},
  editor    = {Argaw, Peniel and Zhang, Haoran and Jabbour, Sarah and Chandak, Payal and Ji, Jerry and Mukherjee, Sumit and Salaudeen, Olawale and Chang, Trenton and Healey, Elizabeth and Gröger, Fabian and Adibi, Amin and Hegselmann, Stefan and Wild, Benjamin and Noori, Ayush},
  volume    = {297},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--14 Dec},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v297/main/assets/friedman26a/friedman26a.pdf},
  url       = {https://proceedings.mlr.press/v297/friedman26a.html},
  abstract  = {Diffusion models have shown remarkable success in generating high-quality perceptual data, but their use for controlled generation in biomedicine remains limited. We introduce {xMADD} (cross-Modal cross-Attention Denoising Diffusion), a conditional diffusion framework for producing diverse, high-resolution medical data, including cardiac {MRI}, brain {MRI}, and {ECG} waveforms, guided by clinical phenotypes, demographics, and multimodal signals. By incorporating cross-attention over conditional embeddings, {xMADD} enables control over generation. Compared to existing generative approaches, {xMADD} achieves superior image fidelity and stability, while accurately reflecting conditioning phenotypes across modalities. Our results highlight the potential of controlled diffusion-based generation to expand biomedical datasets and facilitate data-sharing without compromising sensitive patient data.}
}
Endnote
%0 Conference Paper
%T xMADD: A Unified Diffusion Framework for Conditioned Synthesis of Medical Images and Waveforms
%A Sam Freesun Friedman
%A Sana Tonekaboni
%A Arash A. Nargesi
%A Caroline Uhler
%A Mahnaz Maddah
%B Proceedings of the Fifth Machine Learning for Health Symposium
%C Proceedings of Machine Learning Research
%D 2026
%E Peniel Argaw
%E Haoran Zhang
%E Sarah Jabbour
%E Payal Chandak
%E Jerry Ji
%E Sumit Mukherjee
%E Olawale Salaudeen
%E Trenton Chang
%E Elizabeth Healey
%E Fabian Gröger
%E Amin Adibi
%E Stefan Hegselmann
%E Benjamin Wild
%E Ayush Noori
%F pmlr-v297-friedman26a
%I PMLR
%P 588--604
%U https://proceedings.mlr.press/v297/friedman26a.html
%V 297
%X Diffusion models have shown remarkable success in generating high-quality perceptual data, but their use for controlled generation in biomedicine remains limited. We introduce xMADD (cross-Modal cross-Attention Denoising Diffusion), a conditional diffusion framework for producing diverse, high-resolution medical data, including cardiac MRI, brain MRI, and ECG waveforms, guided by clinical phenotypes, demographics, and multimodal signals. By incorporating cross-attention over conditional embeddings, xMADD enables control over generation. Compared to existing generative approaches, xMADD achieves superior image fidelity and stability, while accurately reflecting conditioning phenotypes across modalities. Our results highlight the potential of controlled diffusion-based generation to expand biomedical datasets and facilitate data-sharing without compromising sensitive patient data.
APA
Friedman, S.F., Tonekaboni, S., Nargesi, A.A., Uhler, C. & Maddah, M. (2026). xMADD: A Unified Diffusion Framework for Conditioned Synthesis of Medical Images and Waveforms. Proceedings of the Fifth Machine Learning for Health Symposium, in Proceedings of Machine Learning Research 297:588-604. Available from https://proceedings.mlr.press/v297/friedman26a.html.
