Train Once, Deploy Anywhere: Edge-Guided Single-source Domain Generalization for Medical Image Segmentation

Jun Jiang, Shi Gu
Proceedings of The 7th International Conference on Medical Imaging with Deep Learning, PMLR 250:722-741, 2024.

Abstract

In medical image analysis, unsupervised domain adaptation models require retraining when receiving samples from a new data distribution, and multi-source domain generalization methods might be infeasible when there is only a single source domain. These issues pose formidable obstacles to model deployment. To this end, we take "Train Once, Deploy Anywhere" as our objective and consider a challenging but practical problem: Single-source Domain Generalization (SDG). Meanwhile, we note that (i) in medical image segmentation applications, generalization errors often come from imprecise predictions at the ambiguous boundaries of anatomies, and (ii) the edges of an image are domain-invariant, which can reduce the domain shift between the source and target domains in all network layers. Specifically, we borrow prior knowledge from Digital Image Processing and take the edge of the image as input to enhance the model's attention at the boundaries of anatomies and improve generalization performance on unknown target domains. Extensive experiments on three typical medical image segmentation datasets, covering cross-sequence, cross-center, and cross-modality settings with various anatomical structures, demonstrate that our method achieves superior generalization performance compared to state-of-the-art SDG methods. The code is available at https://github.com/thinkdifferentor/EGSDG.
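To make the core idea concrete, the sketch below shows one way an edge-guided input could be constructed. The abstract does not name a specific edge operator, so a Sobel filter from classical Digital Image Processing is assumed here purely for illustration; the function names and the two-channel layout are likewise illustrative rather than the paper's actual implementation.

import numpy as np
from scipy import ndimage

def edge_map(image: np.ndarray) -> np.ndarray:
    """Compute a normalized edge-magnitude map for a 2D grayscale image.
    (Sobel is an assumption; the paper may use a different operator.)"""
    img = image.astype(np.float32)
    gx = ndimage.sobel(img, axis=0)   # gradient along rows
    gy = ndimage.sobel(img, axis=1)   # gradient along columns
    mag = np.hypot(gx, gy)            # gradient magnitude
    return mag / (mag.max() + 1e-8)   # scale to [0, 1]

def edge_guided_input(image: np.ndarray) -> np.ndarray:
    """Stack the intensity image with its edge map as a 2-channel array,
    giving the segmentation network an explicit, largely domain-invariant
    boundary cue alongside the raw intensities."""
    return np.stack([image.astype(np.float32), edge_map(image)], axis=0)

# Usage with a dummy 256x256 slice:
slice_2d = np.random.rand(256, 256).astype(np.float32)
x = edge_guided_input(slice_2d)  # shape (2, 256, 256), ready for a 2D CNN

The design intuition is that intensity distributions shift across sequences, centers, and modalities, while anatomical boundaries remain comparatively stable, so feeding the edge map alongside (or in place of) the raw image reduces the gap the network must bridge at test time.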

Cite this Paper


BibTeX
@InProceedings{pmlr-v250-jiang24a, title = {Train Once, Deploy Anywhere: Edge-Guided Single-source Domain Generalization for Medical Image Segmentation}, author = {Jiang, Jun and Gu, Shi}, booktitle = {Proceedings of The 7th International Conference on Medical Imaging with Deep Learning}, pages = {722--741}, year = {2024}, editor = {Burgos, Ninon and Petitjean, Caroline and Vakalopoulou, Maria and Christodoulidis, Stergios and Coupe, Pierrick and Delingette, Hervé and Lartizien, Carole and Mateus, Diana}, volume = {250}, series = {Proceedings of Machine Learning Research}, month = {03--05 Jul}, publisher = {PMLR}, pdf = {https://raw.githubusercontent.com/mlresearch/v250/main/assets/jiang24a/jiang24a.pdf}, url = {https://proceedings.mlr.press/v250/jiang24a.html}, abstract = {In medical image analysis, unsupervised domain adaptation models require retraining when receiving samples from a new data distribution, and multi-source domain generalization methods might be infeasible when there is only a single source domain. These issues pose formidable obstacles to model deployment. To this end, we take "Train Once, Deploy Anywhere" as our objective and consider a challenging but practical problem: Single-source Domain Generalization (SDG). Meanwhile, we note that (i) in medical image segmentation applications, generalization errors often come from imprecise predictions at the ambiguous boundaries of anatomies, and (ii) the edges of an image are domain-invariant, which can reduce the domain shift between the source and target domains in all network layers. Specifically, we borrow prior knowledge from Digital Image Processing and take the edge of the image as input to enhance the model's attention at the boundaries of anatomies and improve generalization performance on unknown target domains. Extensive experiments on three typical medical image segmentation datasets, covering cross-sequence, cross-center, and cross-modality settings with various anatomical structures, demonstrate that our method achieves superior generalization performance compared to state-of-the-art SDG methods. The code is available at https://github.com/thinkdifferentor/EGSDG.} }
Endnote
%0 Conference Paper %T Train Once, Deploy Anywhere: Edge-Guided Single-source Domain Generalization for Medical Image Segmentation %A Jun Jiang %A Shi Gu %B Proceedings of The 7th International Conference on Medical Imaging with Deep Learning %C Proceedings of Machine Learning Research %D 2024 %E Ninon Burgos %E Caroline Petitjean %E Maria Vakalopoulou %E Stergios Christodoulidis %E Pierrick Coupe %E Hervé Delingette %E Carole Lartizien %E Diana Mateus %F pmlr-v250-jiang24a %I PMLR %P 722--741 %U https://proceedings.mlr.press/v250/jiang24a.html %V 250 %X In medical image analysis, unsupervised domain adaptation models require retraining when receiving samples from a new data distribution, and multi-source domain generalization methods might be infeasible when there is only a single source domain. These issues pose formidable obstacles to model deployment. To this end, we take "Train Once, Deploy Anywhere" as our objective and consider a challenging but practical problem: Single-source Domain Generalization (SDG). Meanwhile, we note that (i) in medical image segmentation applications, generalization errors often come from imprecise predictions at the ambiguous boundaries of anatomies, and (ii) the edges of an image are domain-invariant, which can reduce the domain shift between the source and target domains in all network layers. Specifically, we borrow prior knowledge from Digital Image Processing and take the edge of the image as input to enhance the model's attention at the boundaries of anatomies and improve generalization performance on unknown target domains. Extensive experiments on three typical medical image segmentation datasets, covering cross-sequence, cross-center, and cross-modality settings with various anatomical structures, demonstrate that our method achieves superior generalization performance compared to state-of-the-art SDG methods. The code is available at https://github.com/thinkdifferentor/EGSDG.
APA
Jiang, J. & Gu, S. (2024). Train Once, Deploy Anywhere: Edge-Guided Single-source Domain Generalization for Medical Image Segmentation. Proceedings of The 7th International Conference on Medical Imaging with Deep Learning, in Proceedings of Machine Learning Research 250:722-741. Available from https://proceedings.mlr.press/v250/jiang24a.html.
