A Simple yet Effective Adaptive Inter-organ Contrastive Learning Framework for Unsupervised Domain Adaptation
Proceedings of The 9th International Conference on Medical Imaging with Deep Learning, PMLR 315:1523-1538, 2026.
Abstract
Strong unsupervised domain adaptation (UDA) in multi-organ segmentation seeks to unify complementary information from heterogeneous imaging protocols within a single model without sacrificing source-modality performance, yet the substantial domain gap between modalities makes feature-level alignment non-trivial. Pseudo-label learning (PLL) has emerged as the dominant paradigm, but it suffers from information loss due to hard thresholding and bias introduced by class imbalance and noisy predictions. Contrastive learning (CL) offers a complementary direction by structuring semantic contrast, yet existing voxel-level formulations incur prohibitive computational costs on volumetric data and fail to capture the global anatomical context critical for organ segmentation. In this work, we propose Adaptive Inter-organ Contrastive Learning (AICL), a unified UDA framework for 3D multi-organ cross-modality segmentation that exploits PLL and CL synergistically to achieve better cross-modality feature alignment. AICL employs dynamic soft pseudo-labels as guidance in the latent feature space to organize inter-organ samples into positive-negative pairs for CL. Meanwhile, the model is trained with supervised consistency learning (SCL) using mixed ground truths and pseudo-labels, promoting a more discriminative and compact shared latent space. Extensive experiments and ablation studies on an orbital and a cardiac dataset demonstrate the effectiveness of each component and a significant improvement in segmentation results.
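The core idea of soft-pseudo-label-guided inter-organ contrastive learning can be illustrated with a minimal sketch. The code below is a hypothetical NumPy illustration, not the authors' implementation: organ prototypes are formed as soft-pseudo-label-weighted means of the features, and each sample is pulled toward its dominant organ's prototype (positive) and pushed away from the other organs' prototypes (negatives) via an InfoNCE-style loss. All function and variable names here are assumptions for illustration.

```python
import numpy as np

def inter_organ_contrastive_loss(features, soft_labels, tau=0.1):
    """Illustrative inter-organ contrastive loss (hypothetical sketch).

    features:    (N, D) L2-normalized feature embeddings
    soft_labels: (N, C) soft pseudo-label probabilities over C organs
    tau:         temperature for the InfoNCE-style softmax
    """
    # Soft-label-weighted organ prototypes, shape (C, D).
    weights = soft_labels / (soft_labels.sum(axis=0, keepdims=True) + 1e-8)
    protos = weights.T @ features
    protos /= np.linalg.norm(protos, axis=1, keepdims=True) + 1e-8

    # Similarity of each sample to every organ prototype, shape (N, C).
    sims = features @ protos.T / tau
    exp = np.exp(sims - sims.max(axis=1, keepdims=True))
    probs = exp / exp.sum(axis=1, keepdims=True)

    # Dominant organ per sample acts as the positive; other organs
    # implicitly serve as negatives through the softmax denominator.
    pos = soft_labels.argmax(axis=1)
    return float(-np.log(probs[np.arange(len(pos)), pos] + 1e-8).mean())

# Toy usage: 6 samples, 4-D features, 3 organ classes.
rng = np.random.default_rng(0)
feats = rng.normal(size=(6, 4))
feats /= np.linalg.norm(feats, axis=1, keepdims=True)
soft = rng.random((6, 3))
soft /= soft.sum(axis=1, keepdims=True)
loss = inter_organ_contrastive_loss(feats, soft)
```

Working at the organ (prototype) level rather than over all voxel pairs is what keeps such a formulation tractable on volumetric data, in line with the abstract's critique of voxel-level CL.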