Collaborative Learning of Augmentation and Disentanglement for Semi-Supervised Domain Generalized Medical Image Segmentation
Zhiqiang Shen, Peng Cao, Qinghua Zhou, Jinzhu Yang, Osmar R. Zaiane
IEEE Transactions on Medical Imaging · DOI: 10.1109/tmi.2025.3596247 · Published 2025-08-06
Citations: 0
Abstract
This paper explores a challenging yet realistic scenario: semi-supervised domain generalization (SSDG), which combines label scarcity and domain shift problems. We pinpoint two limitations of previous SSDG methods: 1) they neglect the difference between domain shifts within a training dataset (intra-domain shift, IDS) and those between training and testing datasets (cross-domain shift, CDS), and 2) they overlook the interplay between label scarcity and domain shifts, merely stitching together semi-supervised learning (SSL) and domain generalization (DG) techniques. To address these limitations, we propose a novel perspective that decomposes SSDG into a combination of unsupervised domain adaptation (UDA) and DG problems. To this end, we design a causal augmentation and disentanglement framework (CausalAD) for semi-supervised domain generalized medical image segmentation. Concretely, CausalAD involves two collaborative processes: an augmentation process, which uses disentangled style factors to perform style augmentation for UDA, and a disentanglement process, which decouples domain-invariant (content) and domain-variant (noise and style) features for DG. Furthermore, we propose a proxy-based self-paced training strategy (ProSPT) that guides the training of CausalAD by gradually selecting unlabeled image pixels with high-quality pseudo labels in a self-paced manner. Finally, we introduce a hierarchical structural causal model (HSCM) to explain the intuition behind our method. Extensive experiments on cross-sequence, cross-site, and cross-modality semi-supervised domain generalized medical image segmentation settings demonstrate the effectiveness of CausalAD and its superiority over the state of the art. The code is available at https://github.com/Senyh/CausalAD.
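To make the style-augmentation idea concrete, the sketch below shows one common way to disentangle per-channel feature statistics into "style" (mean and standard deviation) and "content" (the normalized residual), then swap styles across a batch to synthesize new domain appearances. This is a minimal illustration in the spirit of AdaIN/MixStyle-style augmentation, not the paper's actual CausalAD implementation (which is in the linked repository); the function name `style_swap_augment` and the in-batch permutation scheme are assumptions.

```python
# Minimal PyTorch sketch of feature-level style swapping.
# style_swap_augment is a hypothetical name, NOT the CausalAD augmentation.
import torch

def style_swap_augment(feats: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Disentangle per-channel statistics as 'style' and swap them in-batch.

    feats: (B, C, H, W) encoder feature maps.
    Returns features whose content is unchanged but whose style statistics
    come from a randomly chosen donor image in the same batch.
    """
    b = feats.size(0)
    mu = feats.mean(dim=(2, 3), keepdim=True)          # style factor: per-channel mean
    sigma = feats.std(dim=(2, 3), keepdim=True) + eps  # style factor: per-channel std
    content = (feats - mu) / sigma                     # style-normalized content
    perm = torch.randperm(b, device=feats.device)      # donor index for each sample
    return content * sigma[perm] + mu[perm]            # re-dress content in donor style
```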
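The self-paced ingredient of ProSPT can be sketched in a similarly hedged way. The abstract does not specify the proxy or its schedule, so the snippet below only shows the generic mechanism: a confidence threshold that starts strict and relaxes over training, so high-quality pseudo-labeled pixels are admitted first and harder ones later. The helper name `self_paced_pixel_mask`, the threshold values, and the linear schedule are all illustrative assumptions.

```python
# Hedged sketch of self-paced pixel selection for pseudo-labeling.
# self_paced_pixel_mask, tau_start, and tau_end are illustrative assumptions,
# not the proxy-based ProSPT strategy described in the paper.
import torch
import torch.nn.functional as F

def self_paced_pixel_mask(logits: torch.Tensor, step: int, total_steps: int,
                          tau_start: float = 0.95, tau_end: float = 0.70):
    """Select unlabeled pixels whose pseudo-label confidence clears a
    threshold that relaxes linearly from tau_start to tau_end over training.

    logits: (B, K, H, W) teacher predictions on unlabeled images.
    Returns (pseudo_labels, mask): hard labels and a 0/1 selection mask.
    """
    probs = F.softmax(logits, dim=1)
    conf, pseudo = probs.max(dim=1)               # per-pixel confidence and label
    t = min(step / max(total_steps, 1), 1.0)      # training progress in [0, 1]
    tau = tau_start + (tau_end - tau_start) * t   # threshold decays: easy pixels first
    mask = (conf >= tau).float()
    return pseudo, mask
```

A typical use would be to gate the unsupervised loss with the mask, e.g. `loss = (F.cross_entropy(student_logits, pseudo, reduction="none") * mask).mean()`.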
Journal Description:
The IEEE Transactions on Medical Imaging (T-MI) is a journal that welcomes the submission of manuscripts focusing on various aspects of medical imaging. The journal encourages the exploration of body structure, morphology, and function through different imaging techniques, including ultrasound, X-rays, magnetic resonance, radionuclides, microwaves, and optical methods. It also promotes contributions related to cell and molecular imaging, as well as all forms of microscopy.
T-MI publishes original research papers that cover a wide range of topics, including but not limited to novel acquisition techniques, medical image processing and analysis, visualization and performance, pattern recognition, machine learning, and other related methods. The journal particularly encourages highly technical studies that offer new perspectives. By emphasizing the unification of medicine, biology, and imaging, T-MI seeks to bridge the gap between instrumentation, hardware, software, mathematics, physics, biology, and medicine by introducing new analysis methods.
While the journal welcomes strong application papers that describe novel methods, it directs to other journals papers that focus solely on important applications of medically adopted or well-established methods without significant methodological innovation. T-MI is indexed in PubMed® and MEDLINE®, which are products of the United States National Library of Medicine.