{"title":"difff - cffbnet:用于脑肿瘤分割的弥散嵌入式跨层特征融合桥网络","authors":"Xiaosheng Wu, Qingyi Hou, Chaosheng Tang, Shuihua Wang, Junding Sun, Yudong Zhang","doi":"10.1002/ima.70088","DOIUrl":null,"url":null,"abstract":"<div>\n \n <p>This study introduces the Diff-CFFBNet, a novel network for brain tumor segmentation designed to address the challenges of misdetection in broken tumor regions within MRI scans, which is crucial for early diagnosis, treatment planning, and disease monitoring. The proposed method incorporates a cross-layer feature fusion bridge (CFFB) to enhance feature interaction and a cross-layer feature fusion U-Net (CFFU-Net) to reduce the semantic gap in diffusion models. Additionally, a sampling-quantity-based fusion (SQ-Fusion) is utilized to leverage the uncertainty of diffusion models for improved segmentation outcomes. Experimental validation on BraTS 2019, BraTS 2020, TCGA-GBM, TCGA-LGG, and MSD datasets demonstrates that Diff-CFFBNet outperforms existing methods, achieving superior performance in terms of Dice score, HD95, and mIoU metrics. These results indicate the model's robustness and precision, even under challenging conditions with complex tumor structures. Diff-CFFBNet provides a reliable solution for accurate and efficient brain tumor segmentation in medical imaging, with the potential for clinical application in treatment planning and disease monitoring. 
Future work aims to extend this approach to multiple tumor types and refine diffusion model applications in medical image segmentation.</p>\n </div>","PeriodicalId":14027,"journal":{"name":"International Journal of Imaging Systems and Technology","volume":"35 3","pages":""},"PeriodicalIF":3.0000,"publicationDate":"2025-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Diff-CFFBNet: Diffusion-Embedded Cross-Layer Feature Fusion Bridge Network for Brain Tumor Segmentation\",\"authors\":\"Xiaosheng Wu, Qingyi Hou, Chaosheng Tang, Shuihua Wang, Junding Sun, Yudong Zhang\",\"doi\":\"10.1002/ima.70088\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div>\\n \\n <p>This study introduces the Diff-CFFBNet, a novel network for brain tumor segmentation designed to address the challenges of misdetection in broken tumor regions within MRI scans, which is crucial for early diagnosis, treatment planning, and disease monitoring. The proposed method incorporates a cross-layer feature fusion bridge (CFFB) to enhance feature interaction and a cross-layer feature fusion U-Net (CFFU-Net) to reduce the semantic gap in diffusion models. Additionally, a sampling-quantity-based fusion (SQ-Fusion) is utilized to leverage the uncertainty of diffusion models for improved segmentation outcomes. Experimental validation on BraTS 2019, BraTS 2020, TCGA-GBM, TCGA-LGG, and MSD datasets demonstrates that Diff-CFFBNet outperforms existing methods, achieving superior performance in terms of Dice score, HD95, and mIoU metrics. These results indicate the model's robustness and precision, even under challenging conditions with complex tumor structures. Diff-CFFBNet provides a reliable solution for accurate and efficient brain tumor segmentation in medical imaging, with the potential for clinical application in treatment planning and disease monitoring. 
Future work aims to extend this approach to multiple tumor types and refine diffusion model applications in medical image segmentation.</p>\\n </div>\",\"PeriodicalId\":14027,\"journal\":{\"name\":\"International Journal of Imaging Systems and Technology\",\"volume\":\"35 3\",\"pages\":\"\"},\"PeriodicalIF\":3.0000,\"publicationDate\":\"2025-04-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Journal of Imaging Systems and Technology\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1002/ima.70088\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"ENGINEERING, ELECTRICAL & ELECTRONIC\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Imaging Systems and Technology","FirstCategoryId":"94","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/ima.70088","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
This study introduces the Diff-CFFBNet, a novel network for brain tumor segmentation designed to address the challenges of misdetection in broken tumor regions within MRI scans, which is crucial for early diagnosis, treatment planning, and disease monitoring. The proposed method incorporates a cross-layer feature fusion bridge (CFFB) to enhance feature interaction and a cross-layer feature fusion U-Net (CFFU-Net) to reduce the semantic gap in diffusion models. Additionally, a sampling-quantity-based fusion (SQ-Fusion) is utilized to leverage the uncertainty of diffusion models for improved segmentation outcomes. Experimental validation on BraTS 2019, BraTS 2020, TCGA-GBM, TCGA-LGG, and MSD datasets demonstrates that Diff-CFFBNet outperforms existing methods, achieving superior performance in terms of Dice score, HD95, and mIoU metrics. These results indicate the model's robustness and precision, even under challenging conditions with complex tumor structures. Diff-CFFBNet provides a reliable solution for accurate and efficient brain tumor segmentation in medical imaging, with the potential for clinical application in treatment planning and disease monitoring. Future work aims to extend this approach to multiple tumor types and refine diffusion model applications in medical image segmentation.
Journal description:
The International Journal of Imaging Systems and Technology (IMA) is a forum for the exchange of ideas and results relevant to imaging systems, including imaging physics and informatics. The journal covers all imaging modalities in humans and animals.
IMA accepts technically sound and scientifically rigorous research in the interdisciplinary field of imaging, including relevant algorithmic research and hardware and software development, and their applications relevant to medical research. The journal provides a platform to publish original research in structural and functional imaging.
The journal is also open to imaging studies of the human body and of animals that describe novel diagnostic imaging and analysis methods. Technical, theoretical, and clinical research in both normal and clinical populations is encouraged. Submissions describing methods, software, databases, replication studies, and negative results are also considered.
The scope of the journal includes, but is not limited to, the following in the context of biomedical research:
Imaging and neuro-imaging modalities: structural MRI, functional MRI, PET, SPECT, CT, ultrasound, EEG, MEG, NIRS etc.;
Neuromodulation and brain stimulation techniques such as TMS and tDCS;
Software and hardware for imaging, especially related to human and animal health;
Image segmentation in normal and clinical populations;
Pattern analysis and classification using machine learning techniques;
Computational modeling and analysis;
Brain connectivity and connectomics;
Systems-level characterization of brain function;
Neural networks and neurorobotics;
Computer vision based on human/animal physiology;
Brain-computer interface (BCI) technology;
Big data, databasing and data mining.