Unpaired Data based Cross-domain Synthesis and Segmentation Using Attention Neural Network
Proceedings of The Eleventh Asian Conference on Machine Learning, PMLR 101:987-1000, 2019.
Abstract
Medical images from different modalities (e.g., MRI, CT) or contrasts (e.g., T1, T2) are commonly combined to extract rich information for medical image analysis. However, some modalities or contrasts may be degraded or missing due to artifacts or strict timing constraints during acquisition, so synthesizing realistic medical images in the required domain is valuable for clinical applications. Meanwhile, because manual annotation is time-consuming, automatic medical image segmentation has attracted much attention. In this paper, we propose SSA-Net, an end-to-end cross-domain synthesis and segmentation framework built on the cycle-consistent generative adversarial network (CycleGAN) for unpaired data. We introduce a gradient consistency term to refine the boundaries in synthesized images. In addition, we design a shape consistency term to constrain the anatomical structure of synthesized images and to guide segmentation without target-domain labels. To make the synthesis subnetwork focus automatically on hard-to-learn regions, we also introduce an attention block into the generator. On two challenging validation datasets (CHAOS and iSeg-2017), the proposed method achieves superior synthesis performance and comparable segmentation performance.
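The abstract does not spell out how the gradient consistency term is formulated, so the following is only a minimal sketch of one common way to encode such a constraint in PyTorch: comparing finite-difference spatial gradients of the source image and its synthesized counterpart with an L1 penalty, which encourages boundaries to stay sharp and aligned. The function names `image_gradients` and `gradient_consistency_loss` are illustrative, and the paper's exact formulation may differ (e.g., a correlation-based measure instead of an L1 distance).

```python
import torch
import torch.nn.functional as F


def image_gradients(x):
    """Finite-difference gradients along height and width.

    x: tensor of shape (N, C, H, W), e.g. a batch of MR or CT slices.
    Returns (dy, dx), the vertical and horizontal differences.
    """
    dy = x[:, :, 1:, :] - x[:, :, :-1, :]
    dx = x[:, :, :, 1:] - x[:, :, :, :-1]
    return dy, dx


def gradient_consistency_loss(synth, source):
    """Illustrative gradient consistency term (assumption, not the paper's exact loss).

    Penalizes differences between the spatial gradients of the synthesized
    image and the source image, so that anatomical boundaries present in the
    source are preserved after cross-domain synthesis.
    """
    dy_s, dx_s = image_gradients(synth)
    dy_r, dx_r = image_gradients(source)
    return F.l1_loss(dy_s, dy_r) + F.l1_loss(dx_s, dx_r)


if __name__ == "__main__":
    # Random tensors standing in for a source slice and its synthesized version.
    source = torch.rand(2, 1, 128, 128)
    synth = torch.rand(2, 1, 128, 128)
    print(gradient_consistency_loss(synth, source).item())
```

In practice a term like this would be added, with a weighting coefficient, to the CycleGAN adversarial and cycle-consistency losses during training of the synthesis subnetwork.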