Style Content Decomposition-based Data Augmentation for Domain Generalizable Medical Image Segmentation

Zhiqiang Shen, Peng Cao, Jinzhu Yang, Osmar R. Zaiane, Zhaolin Chen
Proceedings of The 9th International Conference on Medical Imaging with Deep Learning, PMLR 315:21-35, 2026.

Abstract

Due to domain shifts across diverse medical imaging modalities, learned segmentation models often suffer significant performance degradation during deployment. We posit that these domain shifts can be categorized into two main components: (1) "style" shifts, referring to global disparities in image properties such as illumination, contrast, and color; and (2) "content" shifts, involving local discrepancies in anatomical structures. To address the domain shifts in medical image segmentation, we first factorize an image into style codes and content maps, explicitly modeling the "style" and "content" components. Building on this, we introduce a Style-Content decomposition-based data augmentation algorithm (StyCona), which performs augmentation on both the global style and local content of source-domain images, enabling the training of a well-generalized model for domain generalizable medical image segmentation. StyCona is a simple yet effective plug-and-play module that substantially improves model generalization without requiring additional training parameters or modifications to segmentation model architectures. Experiments on cardiac magnetic resonance imaging and fundus photography segmentation tasks, with single and multiple target domains respectively, demonstrate the effectiveness of StyCona and its superiority over state-of-the-art domain generalization methods.
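To make the decomposition idea concrete, here is a minimal illustrative sketch, not the paper's actual StyCona algorithm: it treats per-channel global statistics (mean, standard deviation) as a stand-in "style code" and the normalized residual map as the "content", in the spirit of common style-transfer formulations. The function names and the statistics-based split are assumptions for illustration only.

```python
import numpy as np

def decompose(img):
    # Illustrative assumption: "style" = global per-channel mean/std,
    # "content" = the normalized residual map preserving local structure.
    # (The paper's actual factorization may differ.)
    mean = img.mean(axis=(0, 1), keepdims=True)
    std = img.std(axis=(0, 1), keepdims=True) + 1e-6
    content = (img - mean) / std
    return (mean, std), content

def stylize(content, style):
    # Re-render a content map under a given style code.
    mean, std = style
    return content * std + mean

rng = np.random.default_rng(0)
src = rng.random((64, 64, 3))   # source-domain image (toy data)
ref = rng.random((64, 64, 3))   # style reference image (toy data)

style_src, content_src = decompose(src)
style_ref, _ = decompose(ref)

# Style augmentation: source content rendered with the reference style.
aug = stylize(content_src, style_ref)

# Sanity check: recombining content with its own style recovers the image.
recon = stylize(content_src, style_src)
assert np.allclose(recon, src, atol=1e-4)
```

A segmentation model trained on such augmented views keeps the same label map (content is untouched by the style swap), which is why this style of augmentation needs no extra training parameters or architecture changes.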

Cite this Paper


BibTeX
@InProceedings{pmlr-v315-shen26a,
  title     = {Style Content Decomposition-based Data Augmentation for Domain Generalizable Medical Image Segmentation},
  author    = {Shen, Zhiqiang and Cao, Peng and Yang, Jinzhu and Zaiane, Osmar R. and Chen, Zhaolin},
  booktitle = {Proceedings of The 9th International Conference on Medical Imaging with Deep Learning},
  pages     = {21--35},
  year      = {2026},
  editor    = {Huo, Yuankai and Gao, Mingchen and Kuo, Chang-Fu and Jin, Yueming and Deng, Ruining},
  volume    = {315},
  series    = {Proceedings of Machine Learning Research},
  month     = {08--10 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v315/main/assets/shen26a/shen26a.pdf},
  url       = {https://proceedings.mlr.press/v315/shen26a.html},
  abstract  = {Due to domain shifts across diverse medical imaging modalities, learned segmentation models often suffer significant performance degradation during deployment. We posit that these domain shifts can be categorized into two main components: (1) "style" shifts, referring to global disparities in image properties such as illumination, contrast, and color; and (2) "content" shifts, involving local discrepancies in anatomical structures. To address the domain shifts in medical image segmentation, we first factorize an image into style codes and content maps, explicitly modeling the "style" and "content" components. Building on this, we introduce a Style-Content decomposition-based data augmentation algorithm (StyCona), which performs augmentation on both the global style and local content of source-domain images, enabling the training of a well-generalized model for domain generalizable medical image segmentation. StyCona is a simple yet effective plug-and-play module that substantially improves model generalization without requiring additional training parameters or modifications to segmentation model architectures. Experiments on cardiac magnetic resonance imaging and fundus photography segmentation tasks, with single and multiple target domains respectively, demonstrate the effectiveness of StyCona and its superiority over state-of-the-art domain generalization methods.}
}
Endnote
%0 Conference Paper
%T Style Content Decomposition-based Data Augmentation for Domain Generalizable Medical Image Segmentation
%A Zhiqiang Shen
%A Peng Cao
%A Jinzhu Yang
%A Osmar R. Zaiane
%A Zhaolin Chen
%B Proceedings of The 9th International Conference on Medical Imaging with Deep Learning
%C Proceedings of Machine Learning Research
%D 2026
%E Yuankai Huo
%E Mingchen Gao
%E Chang-Fu Kuo
%E Yueming Jin
%E Ruining Deng
%F pmlr-v315-shen26a
%I PMLR
%P 21--35
%U https://proceedings.mlr.press/v315/shen26a.html
%V 315
%X Due to domain shifts across diverse medical imaging modalities, learned segmentation models often suffer significant performance degradation during deployment. We posit that these domain shifts can be categorized into two main components: (1) "style" shifts, referring to global disparities in image properties such as illumination, contrast, and color; and (2) "content" shifts, involving local discrepancies in anatomical structures. To address the domain shifts in medical image segmentation, we first factorize an image into style codes and content maps, explicitly modeling the "style" and "content" components. Building on this, we introduce a Style-Content decomposition-based data augmentation algorithm (StyCona), which performs augmentation on both the global style and local content of source-domain images, enabling the training of a well-generalized model for domain generalizable medical image segmentation. StyCona is a simple yet effective plug-and-play module that substantially improves model generalization without requiring additional training parameters or modifications to segmentation model architectures. Experiments on cardiac magnetic resonance imaging and fundus photography segmentation tasks, with single and multiple target domains respectively, demonstrate the effectiveness of StyCona and its superiority over state-of-the-art domain generalization methods.
APA
Shen, Z., Cao, P., Yang, J., Zaiane, O.R. & Chen, Z. (2026). Style Content Decomposition-based Data Augmentation for Domain Generalizable Medical Image Segmentation. Proceedings of The 9th International Conference on Medical Imaging with Deep Learning, in Proceedings of Machine Learning Research 315:21-35. Available from https://proceedings.mlr.press/v315/shen26a.html.
