Semi-Supervised 3-D Medical Image Segmentation Using Multiconsistency Learning With Fuzzy Perception-Guided Target Selection

Tao Lei; Wenbiao Song; Weichuan Zhang; Xiaogang Du; Chenxia Li; Lifeng He; Asoke K. Nandi

IEEE Transactions on Radiation and Plasma Medical Sciences, vol. 9, no. 4, pp. 421-432, published 2024-10-07. DOI: 10.1109/TRPMS.2024.3473929
Citations: 0
Abstract
Semi-supervised learning methods based on the mean teacher model have achieved great success in 3-D medical image segmentation. However, most existing methods provide auxiliary supervision signals only for reliable regions and ignore fuzzy regions in unlabeled data during consistency learning, discarding valuable information. Moreover, some of these methods employ only multitask learning to improve model performance, ignoring consistency learning between tasks and between models and thereby weakening geometric shape constraints. To address these issues, in this article we propose a semi-supervised 3-D medical image segmentation framework that uses multiconsistency learning with fuzzy perception-guided target selection. First, we design a fuzzy perception-guided target selection strategy that assesses predictions from multiple perspectives, fusing fuzziness minimization with a momentum update of the fuzzy map to obtain the fuzzy region. By incorporating the fuzzy region into consistency learning, our model can exploit more useful information from the fuzzy regions of unlabeled data. Second, we design a multiconsistency learning strategy that employs intratask and intermodel mutual consistency learning as well as cross-model cross-task consistency learning to learn the shape representation of fuzzy regions effectively; this encourages the model to agree on predictions across different tasks in fuzzy regions. Experiments demonstrate that the proposed framework outperforms current mainstream methods on two popular 3-D medical datasets: the left atrium segmentation dataset and the brain tumor segmentation dataset. The code will be released at: https://github.com/SUST-reynole.
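The abstract only describes the selection mechanism in words; the snippet below is a minimal, hypothetical PyTorch sketch of the general idea, assuming normalized prediction entropy as the fuzziness measure and an EMA-style momentum update of the fuzzy map. The function names, the threshold, and the momentum value are illustrative assumptions, not the authors' released implementation.

```python
import math

import torch
import torch.nn.functional as F


def fuzziness_map(logits: torch.Tensor) -> torch.Tensor:
    """Per-voxel fuzziness of a softmax prediction, measured here as
    normalized entropy: 1 = maximally fuzzy, 0 = fully confident.
    Assumes logits of shape (B, C, D, H, W) with C >= 2 classes."""
    probs = F.softmax(logits, dim=1)                     # (B, C, D, H, W)
    ent = -(probs * probs.clamp_min(1e-8).log()).sum(1)  # (B, D, H, W)
    return ent / math.log(logits.shape[1])


def momentum_update(prev_map, curr_map, momentum=0.9):
    """EMA-style momentum update of the running fuzzy map, so the
    fuzzy region evolves smoothly over training iterations."""
    if prev_map is None:
        return curr_map
    return momentum * prev_map + (1.0 - momentum) * curr_map


def fuzzy_region_mask(fuzzy_map, threshold=0.5):
    """Binary mask of the fuzzy region: voxels whose running fuzziness
    stays above a threshold; the complement is the reliable region."""
    return (fuzzy_map > threshold).float()


# Toy usage: binary 3-D logits from a teacher model on unlabeled data.
teacher_logits = torch.randn(1, 2, 8, 16, 16)
running = momentum_update(None, fuzziness_map(teacher_logits))
mask = fuzzy_region_mask(running)  # 1 inside the fuzzy region
```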
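Similarly hedged, the sketch below shows how the multiconsistency terms could be restricted to the fuzzy region, assuming a dual-task setup in which each of two models predicts both a segmentation map and a signed distance field (SDF), a design common in mean-teacher variants. The sigmoid SDF-to-probability transform, the constant k, the equal weighting of the terms, and all function names are assumptions rather than the paper's exact formulation.

```python
import torch


def masked_mse(p: torch.Tensor, q: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Mean-squared consistency restricted to the fuzzy region.
    p, q: (B, C, D, H, W) probabilities; mask: (B, D, H, W) in {0, 1}."""
    diff = (p - q) ** 2 * mask.unsqueeze(1)
    denom = (mask.sum() * p.shape[1]).clamp_min(1.0)
    return diff.sum() / denom


def sdf_to_prob(sdf: torch.Tensor, k: float = 1500.0) -> torch.Tensor:
    """Smooth conversion of a predicted signed distance field into a
    foreground probability (negative distances lie inside the object)."""
    return torch.sigmoid(-k * sdf)


def multi_consistency_loss(seg_a, sdf_a, seg_b, sdf_b, fuzzy_mask):
    """Illustrative combination of three consistency terms over the
    fuzzy region, loosely mirroring the strategies named in the
    abstract (equal weighting is an assumption):
      - intratask, intermodel: the two models' segmentation maps agree;
      - within-model cross-task: each model's segmentation agrees with
        its own transformed SDF;
      - cross-model cross-task: each model's segmentation agrees with
        the other model's transformed SDF."""
    fg_a, fg_b = seg_a[:, 1:2], seg_b[:, 1:2]  # foreground channel
    intra_task = masked_mse(seg_a, seg_b, fuzzy_mask)
    cross_task = (masked_mse(fg_a, sdf_to_prob(sdf_a), fuzzy_mask)
                  + masked_mse(fg_b, sdf_to_prob(sdf_b), fuzzy_mask))
    cross_model = (masked_mse(fg_a, sdf_to_prob(sdf_b), fuzzy_mask)
                   + masked_mse(fg_b, sdf_to_prob(sdf_a), fuzzy_mask))
    return intra_task + cross_task + cross_model
```

The point of restricting every term to fuzzy_mask is that agreement is enforced precisely where predictions are uncertain, rather than only where they are already reliable.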