Heterogeneous Medical Data Integration with Multi-Source StyleGAN

Wei-Cheng Lai, Matthias Kirchler, Hadya Yassin, Jana Fehr, Alexander Rakowski, Hampus Olsson, Ludger Starke, Jason M. Millward, Sonia Waiczies, Christoph Lippert
Proceedings of The 7th International Conference on Medical Imaging with Deep Learning, PMLR 250:857-887, 2024.

Abstract

Conditional deep generative models have emerged as powerful tools for generating realistic images while enabling fine-grained control over latent factors. In the medical domain, data scarcity and the need to integrate information from diverse sources present challenges for existing generative models, often resulting in low-quality image generation and poor controllability. To address these two issues, we propose Multi-Source StyleGAN (MSSG). MSSG learns jointly from multiple heterogeneous data sources with different available covariates and can generate new images controlling all covariates together, thereby overcoming both data scarcity and heterogeneity. We validate our method on semi-synthetic data of hand-written digit images with varying morphological features and in controlled multi-source simulations on retinal fundus images and brain magnetic resonance images. Finally, we apply MSSG in a real-world setting of brain MRI from different sources. Our proposed algorithm offers a promising direction for unbiased data generation from disparate sources. For the reproducibility of our experimental results, we provide [detailed code implementation](https://github.com/weslai/msstylegans).
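The abstract's core idea is that a single conditional generator is trained on the union of covariates, even though each source observes only a subset, so that at sampling time all covariates can be set jointly. The sketch below illustrates this with a toy PyTorch conditional generator; the covariate names, the zero-filling of unobserved covariates, and `ToyConditionalGenerator` are illustrative assumptions for this example, not the MSSG implementation (see the linked repository for the authors' code).

```python
# Minimal sketch (not the authors' code): conditioning a generator on the
# union of covariates from heterogeneous sources.
import torch
import torch.nn as nn

# Union of covariates across sources; each source records only a subset.
ALL_COVARIATES = ["age", "sex", "covariate_a", "covariate_b"]
SOURCE_COVARIATES = {
    "source_A": ["age", "sex", "covariate_a"],
    "source_B": ["age", "sex", "covariate_b"],
}

def build_condition(source: str, values: dict) -> torch.Tensor:
    """Map a sample's observed covariates onto the shared covariate vector.

    Covariates a source does not record are left at 0 here purely for
    illustration; in practice they would be imputed or marginalised.
    """
    cond = torch.zeros(len(ALL_COVARIATES))
    for name in SOURCE_COVARIATES[source]:
        cond[ALL_COVARIATES.index(name)] = values[name]
    return cond

class ToyConditionalGenerator(nn.Module):
    """Stand-in for a StyleGAN generator: noise + covariates -> image."""
    def __init__(self, z_dim: int = 64, c_dim: int = len(ALL_COVARIATES)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim + c_dim, 256), nn.ReLU(),
            nn.Linear(256, 32 * 32), nn.Tanh(),
        )

    def forward(self, z: torch.Tensor, c: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([z, c], dim=1)).view(-1, 1, 32, 32)

if __name__ == "__main__":
    g = ToyConditionalGenerator()
    # At sampling time all covariates can be controlled together, even though
    # no single training source observed them all jointly.
    c = torch.stack([
        build_condition("source_A", {"age": 0.6, "sex": 1.0, "covariate_a": 0.3}),
        build_condition("source_B", {"age": 0.2, "sex": 0.0, "covariate_b": 0.8}),
    ])
    z = torch.randn(2, 64)
    imgs = g(z, c)
    print(imgs.shape)  # torch.Size([2, 1, 32, 32])
```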

Cite this Paper


BibTeX
@InProceedings{pmlr-v250-lai24a,
  title     = {Heterogeneous Medical Data Integration with Multi-Source StyleGAN},
  author    = {Lai, Wei-Cheng and Kirchler, Matthias and Yassin, Hadya and Fehr, Jana and Rakowski, Alexander and Olsson, Hampus and Starke, Ludger and Millward, Jason M. and Waiczies, Sonia and Lippert, Christoph},
  booktitle = {Proceedings of The 7th International Conference on Medical Imaging with Deep Learning},
  pages     = {857--887},
  year      = {2024},
  editor    = {Burgos, Ninon and Petitjean, Caroline and Vakalopoulou, Maria and Christodoulidis, Stergios and Coupe, Pierrick and Delingette, Hervé and Lartizien, Carole and Mateus, Diana},
  volume    = {250},
  series    = {Proceedings of Machine Learning Research},
  month     = {03--05 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v250/main/assets/lai24a/lai24a.pdf},
  url       = {https://proceedings.mlr.press/v250/lai24a.html},
  abstract  = {Conditional deep generative models have emerged as powerful tools for generating realistic images while enabling fine-grained control over latent factors. In the medical domain, data scarcity and the need to integrate information from diverse sources present challenges for existing generative models, often resulting in low-quality image generation and poor controllability. To address these two issues, we propose Multi-Source StyleGAN (MSSG). MSSG learns jointly from multiple heterogeneous data sources with different available covariates and can generate new images controlling all covariates together, thereby overcoming both data scarcity and heterogeneity. We validate our method on semi-synthetic data of hand-written digit images with varying morphological features and in controlled multi-source simulations on retinal fundus images and brain magnetic resonance images. Finally, we apply MSSG in a real-world setting of brain MRI from different sources. Our proposed algorithm offers a promising direction for unbiased data generation from disparate sources. For the reproducibility of our experimental results, we provide [detailed code implementation](https://github.com/weslai/msstylegans).}
}
Endnote
%0 Conference Paper
%T Heterogeneous Medical Data Integration with Multi-Source StyleGAN
%A Wei-Cheng Lai
%A Matthias Kirchler
%A Hadya Yassin
%A Jana Fehr
%A Alexander Rakowski
%A Hampus Olsson
%A Ludger Starke
%A Jason M. Millward
%A Sonia Waiczies
%A Christoph Lippert
%B Proceedings of The 7th International Conference on Medical Imaging with Deep Learning
%C Proceedings of Machine Learning Research
%D 2024
%E Ninon Burgos
%E Caroline Petitjean
%E Maria Vakalopoulou
%E Stergios Christodoulidis
%E Pierrick Coupe
%E Hervé Delingette
%E Carole Lartizien
%E Diana Mateus
%F pmlr-v250-lai24a
%I PMLR
%P 857--887
%U https://proceedings.mlr.press/v250/lai24a.html
%V 250
%X Conditional deep generative models have emerged as powerful tools for generating realistic images while enabling fine-grained control over latent factors. In the medical domain, data scarcity and the need to integrate information from diverse sources present challenges for existing generative models, often resulting in low-quality image generation and poor controllability. To address these two issues, we propose Multi-Source StyleGAN (MSSG). MSSG learns jointly from multiple heterogeneous data sources with different available covariates and can generate new images controlling all covariates together, thereby overcoming both data scarcity and heterogeneity. We validate our method on semi-synthetic data of hand-written digit images with varying morphological features and in controlled multi-source simulations on retinal fundus images and brain magnetic resonance images. Finally, we apply MSSG in a real-world setting of brain MRI from different sources. Our proposed algorithm offers a promising direction for unbiased data generation from disparate sources. For the reproducibility of our experimental results, we provide [detailed code implementation](https://github.com/weslai/msstylegans).
APA
Lai, W., Kirchler, M., Yassin, H., Fehr, J., Rakowski, A., Olsson, H., Starke, L., Millward, J.M., Waiczies, S. & Lippert, C. (2024). Heterogeneous Medical Data Integration with Multi-Source StyleGAN. Proceedings of The 7th International Conference on Medical Imaging with Deep Learning, in Proceedings of Machine Learning Research 250:857-887. Available from https://proceedings.mlr.press/v250/lai24a.html.

Related Material

[Download PDF](https://raw.githubusercontent.com/mlresearch/v250/main/assets/lai24a/lai24a.pdf)