{"title":"A 3-D Anatomy-Guided Self-Training Segmentation Framework for Unpaired Cross-Modality Medical Image Segmentation","authors":"Yuzhou Zhuang;Hong Liu;Enmin Song;Xiangyang Xu;Yongde Liao;Guanchao Ye;Chih-Cheng Hung","doi":"10.1109/TRPMS.2023.3332619","DOIUrl":null,"url":null,"abstract":"Unsupervised domain adaptation (UDA) methods have achieved promising performance in alleviating the domain shift between different imaging modalities. In this article, we propose a robust two-stage 3-D anatomy-guided self-training cross-modality segmentation (ASTCMSeg) framework based on UDA for unpaired cross-modality image segmentation, including the anatomy-guided image translation and self-training segmentation stages. In the translation stage, we first leverage the similarity distributions between patches to capture the latent anatomical relationships and propose an anatomical relation consistency (ARC) for preserving the correct anatomical relationships. Then, we design a frequency domain constraint to enforce the consistency of important frequency components during image translation. Finally, we integrate the ARC and frequency domain constraint with contrastive learning for anatomy-guided image translation. In the segmentation stage, we propose a context-aware anisotropic mesh network for segmenting anisotropic volumes in the target domain. Meanwhile, we design a volumetric adaptive self-training method that dynamically selects appropriate pseudo-label thresholds to learn the abundant label information from unlabeled target volumes. Our proposed method is validated on the cross-modality brain structure, cardiac substructure, and abdominal multiorgan segmentation tasks. Experimental results show that our proposed method achieves state-of-the-art performance in all tasks and significantly outperforms other 2-D based or 3-D based UDA methods.","PeriodicalId":46807,"journal":{"name":"IEEE Transactions on Radiation and Plasma Medical Sciences","volume":null,"pages":null},"PeriodicalIF":4.6000,"publicationDate":"2023-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Radiation and Plasma Medical Sciences","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10317880/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING","Score":null,"Total":0}
Citations: 0
Abstract
Unsupervised domain adaptation (UDA) methods have achieved promising performance in alleviating the domain shift between different imaging modalities. In this article, we propose a robust two-stage 3-D anatomy-guided self-training cross-modality segmentation (ASTCMSeg) framework based on UDA for unpaired cross-modality image segmentation, comprising an anatomy-guided image translation stage and a self-training segmentation stage. In the translation stage, we first leverage the similarity distributions between patches to capture latent anatomical relationships and propose an anatomical relation consistency (ARC) constraint to preserve correct anatomical relationships. Then, we design a frequency-domain constraint to enforce the consistency of important frequency components during image translation. Finally, we integrate the ARC and frequency-domain constraints with contrastive learning for anatomy-guided image translation. In the segmentation stage, we propose a context-aware anisotropic mesh network for segmenting anisotropic volumes in the target domain. Meanwhile, we design a volumetric adaptive self-training method that dynamically selects appropriate pseudo-label thresholds to learn abundant label information from unlabeled target volumes. The proposed method is validated on cross-modality brain structure, cardiac substructure, and abdominal multiorgan segmentation tasks. Experimental results show that it achieves state-of-the-art performance on all tasks and significantly outperforms other 2-D- and 3-D-based UDA methods.
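The anatomical relation consistency described in the abstract is built on similarity distributions between patches. The snippet below is a minimal PyTorch sketch of that general idea, not the authors' implementation: it samples patch features from the source and translated images, forms patch-to-patch cosine-similarity distributions, and matches them with a KL divergence. The function name, the random patch sampling, the temperature, and the choice of KL divergence are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def anatomical_relation_consistency(feat_src, feat_trans, num_patches=64, tau=0.07):
    """Sketch of a patch-relation consistency loss (assumed formulation).

    feat_src, feat_trans: (B, C, H, W) encoder feature maps of the source
    image and its translated counterpart.
    """
    b, c, h, w = feat_src.shape
    # Flatten spatial locations and sample a shared set of patch positions.
    src = feat_src.flatten(2).permute(0, 2, 1)      # (B, HW, C)
    trans = feat_trans.flatten(2).permute(0, 2, 1)  # (B, HW, C)
    idx = torch.randperm(h * w, device=feat_src.device)[:num_patches]
    src, trans = src[:, idx], trans[:, idx]         # (B, P, C)

    # Cosine similarity between every pair of sampled patches.
    src = F.normalize(src, dim=-1)
    trans = F.normalize(trans, dim=-1)
    sim_src = src @ src.transpose(1, 2) / tau       # (B, P, P)
    sim_trans = trans @ trans.transpose(1, 2) / tau

    # Keep the translated image's relational structure close to the source's
    # by matching the two similarity distributions (KL divergence is an
    # assumption; any distribution-matching loss could be substituted).
    p_src = F.softmax(sim_src, dim=-1)
    log_p_trans = F.log_softmax(sim_trans, dim=-1)
    return F.kl_div(log_p_trans, p_src, reduction="batchmean")
```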
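The frequency-domain constraint is described only as enforcing consistency of important frequency components. One plausible reading, sketched below under that assumption, is to compare low-frequency amplitude spectra of the source and translated images; the circular low-pass mask and the L1 distance are illustrative choices, not the paper's definition.

```python
import torch

def frequency_consistency(img_src, img_trans, radius_ratio=0.25):
    """Assumed frequency-domain constraint: keep low-frequency amplitude
    spectra consistent between source and translated images.

    img_src, img_trans: (B, C, H, W) image tensors.
    """
    # Centered 2-D spectra of both images.
    f_src = torch.fft.fftshift(torch.fft.fft2(img_src), dim=(-2, -1))
    f_trans = torch.fft.fftshift(torch.fft.fft2(img_trans), dim=(-2, -1))

    h, w = img_src.shape[-2:]
    yy, xx = torch.meshgrid(
        torch.arange(h, device=img_src.device),
        torch.arange(w, device=img_src.device),
        indexing="ij",
    )
    # Circular low-pass mask around the spectrum center.
    dist = ((yy - h // 2) ** 2 + (xx - w // 2) ** 2).float().sqrt()
    mask = (dist <= radius_ratio * min(h, w)).to(img_src.dtype)

    # L1 distance between masked amplitude spectra.
    return (mask * (f_src.abs() - f_trans.abs())).abs().mean()
```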
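Likewise, the volumetric adaptive self-training step is described as dynamically selecting pseudo-label thresholds for unlabeled target volumes. A hypothetical per-class version is sketched below: the threshold for each class is taken as a percentile of that class's predicted confidences, and low-confidence voxels are marked as ignored. The per-class percentile rule and the `ignore_index` convention are assumptions for illustration.

```python
import torch

def adaptive_pseudo_labels(probs, keep_ratio=0.8, ignore_index=255):
    """Sketch of adaptive pseudo-label generation (assumed rule).

    probs: (B, C, D, H, W) softmax probabilities for a target-domain volume.
    Returns a pseudo-label volume where low-confidence voxels are ignored.
    """
    conf, pred = probs.max(dim=1)        # voxel-wise confidence and label
    pseudo = pred.clone()
    for cls in range(probs.shape[1]):
        cls_conf = conf[pred == cls]
        if cls_conf.numel() == 0:
            continue
        # Threshold chosen so roughly `keep_ratio` of this class's voxels survive.
        k = max(1, int((1.0 - keep_ratio) * cls_conf.numel()))
        thr = cls_conf.kthvalue(k).values
        pseudo[(pred == cls) & (conf < thr)] = ignore_index
    return pseudo
```

The masked voxels would then be excluded from the self-training segmentation loss, so the network only learns from the more confident pseudo-labels in each volume.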