Xiaoyu Liu, Linhao Qu, Ziyue Xie, Yonghong Shi, Zhijian Song
DOI: 10.1109/tmi.2025.3614853
Journal: IEEE Transactions on Medical Imaging (JCR Q1, Computer Science, Interdisciplinary Applications; impact factor 9.8)
Publication date: 2025-09-26
Publication type: Journal Article
Deep Mutual Learning among Partially Labeled Datasets for Multi-Organ Segmentation.
Labeling multiple organs for segmentation is a complex and time-consuming process, which has led to a scarcity of comprehensively labeled multi-organ datasets alongside the emergence of numerous partially labeled ones. Current methods face three critical limitations: incomplete exploitation of the available supervision, complex inference, and insufficient validation of generalization capability. This paper proposes a new framework based on mutual learning that aims to improve multi-organ segmentation by sharing complementary information among partially labeled datasets. Specifically, the method consists of three key components: (1) training partial-organ segmentation models with Difference Mutual Learning, (2) pseudo-label generation and filtering, and (3) training full-organ segmentation models enhanced by Similarity Mutual Learning. Difference Mutual Learning enables each partial-organ segmentation model to use labels and features from the other datasets as complementary signals, improving cross-dataset organ detection and yielding better pseudo labels. Similarity Mutual Learning augments the training of each full-organ segmentation model with two additional supervision sources, inter-dataset ground truths and dynamic reliable transferred features, significantly boosting segmentation accuracy. The resulting model achieves both high accuracy and efficient inference for multi-organ segmentation. Extensive experiments on nine datasets spanning the head-neck, chest, abdomen, and pelvis demonstrate that the proposed method achieves state-of-the-art performance.
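The abstract's second component, pseudo-label generation and filtering, can be illustrated with a minimal sketch. The paper does not specify the filtering criterion, so the confidence-threshold rule below (masking voxels whose top class probability is low so they contribute no supervision) is an assumption chosen only to make the step concrete; the function name and parameters are hypothetical, not the authors' API.

```python
import numpy as np

def generate_filtered_pseudo_labels(probs, conf_threshold=0.9, ignore_index=255):
    """Hypothetical sketch of pseudo-label generation and filtering.

    probs: (C, H, W) softmax probabilities from a partial-organ model.
    Voxels whose maximum class probability falls below `conf_threshold`
    are set to `ignore_index`, so a loss configured to skip that index
    receives no supervision from unreliable regions.
    """
    confidence = probs.max(axis=0)   # per-voxel maximum class probability
    pseudo = probs.argmax(axis=0)    # hard pseudo-label per voxel
    pseudo[confidence < conf_threshold] = ignore_index
    return pseudo

# Toy example: 3 classes on a 2x2 grid.
probs = np.array([
    [[0.95, 0.40], [0.10, 0.30]],   # class 0
    [[0.03, 0.35], [0.85, 0.30]],   # class 1
    [[0.02, 0.25], [0.05, 0.40]],   # class 2
])
labels = generate_filtered_pseudo_labels(probs, conf_threshold=0.8)
# Confident voxels keep their argmax class; the two low-confidence
# voxels in the right column are masked to 255.
```

In a full pipeline, the masked label map would supervise the full-organ models only where the partial-organ teachers are reliable, which is the general role the abstract assigns to this step.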
Journal introduction:
The IEEE Transactions on Medical Imaging (T-MI) is a journal that welcomes the submission of manuscripts focusing on various aspects of medical imaging. The journal encourages the exploration of body structure, morphology, and function through different imaging techniques, including ultrasound, X-rays, magnetic resonance, radionuclides, microwaves, and optical methods. It also promotes contributions related to cell and molecular imaging, as well as all forms of microscopy.
T-MI publishes original research papers that cover a wide range of topics, including but not limited to novel acquisition techniques, medical image processing and analysis, visualization and performance, pattern recognition, machine learning, and other related methods. The journal particularly encourages highly technical studies that offer new perspectives. By emphasizing the unification of medicine, biology, and imaging, T-MI seeks to bridge the gap between instrumentation, hardware, software, mathematics, physics, biology, and medicine by introducing new analysis methods.
While the journal welcomes strong application papers that describe novel methods, it directs papers that focus solely on important applications using medically adopted or well-established methods, without significant innovation in methodology, to other journals. T-MI is indexed in PubMed® and MEDLINE®, which are products of the United States National Library of Medicine.