Learning deep feature representations for multi-modal MR brain tumor segmentation
Tongxue Zhou, Zheng Wang, Xiaohui Liu, Weibo Liu, Shan Zhu
Neurocomputing, Volume 638, Article 130162 (published 2025-04-04). DOI: 10.1016/j.neucom.2025.130162
Citations: 0
Abstract
Brain tumor segmentation is crucial for accurate diagnosis, treatment planning, and patient monitoring. Different MRI sequences provide unique and complementary information about various aspects of brain tumors. However, effectively integrating these diverse data sources into an accurate segmentation remains a significant challenge due to the inherent complexity and variability of the data. To address this challenge, this paper proposes a deep learning framework designed to fuse multi-modal MRI data and improve brain tumor segmentation accuracy. Specifically, the framework introduces two novel modules: the modality-wise feature fusion module (MFFM) and the spatial and channel-wise feature fusion module (SCFFM). The MFFM learns modality-specific features and integrates information across modalities, yielding richer and more discriminative feature representations. The SCFFM captures contextual information and integrates multi-channel data by emphasizing informative regions and highlighting critical features. Together, these modules strengthen the model's capacity for feature learning, leading to more precise tumor segmentation. Experimental validation on two public datasets demonstrates the effectiveness of the proposed approach: an average Dice similarity coefficient of 83.2% with an average 95% Hausdorff distance (HD95) of 4.3 mm on the BraTS 2018 dataset, and an average Dice similarity coefficient of 82.9% with an average HD95 of 5.5 mm on the BraTS 2019 dataset. The framework not only offers an effective method for precise multi-modal brain tumor segmentation but also suggests a promising solution for other multi-modal data fusion challenges.
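The abstract describes the two fusion modules only at a high level. As a purely illustrative reading of that description, the PyTorch sketch below pairs per-modality encoders with concatenation-based fusion (MFFM) and a squeeze-and-excitation-style channel gate plus a spatial attention map (SCFFM). All class names, layer choices, and channel counts here are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical PyTorch sketch of the two fusion ideas from the abstract.
# The paper provides no reference code; every layer choice, channel
# count, and the exact fusion arithmetic below are assumptions.
import torch
import torch.nn as nn


class MFFM(nn.Module):
    """Modality-wise feature fusion (assumed form): encode each MRI
    modality separately, then merge the modality-specific features."""

    def __init__(self, n_modalities=4, channels=16):
        super().__init__()
        # One lightweight encoder per modality (e.g. T1, T1ce, T2, FLAIR).
        self.encoders = nn.ModuleList(
            nn.Sequential(
                nn.Conv3d(1, channels, kernel_size=3, padding=1),
                nn.InstanceNorm3d(channels),
                nn.ReLU(inplace=True),
            )
            for _ in range(n_modalities)
        )
        # 1x1x1 convolution fuses the concatenated modality features.
        self.fuse = nn.Conv3d(n_modalities * channels, channels, kernel_size=1)

    def forward(self, x):  # x: (B, n_modalities, D, H, W)
        feats = [enc(x[:, i:i + 1]) for i, enc in enumerate(self.encoders)]
        return self.fuse(torch.cat(feats, dim=1))


class SCFFM(nn.Module):
    """Spatial and channel-wise feature fusion (assumed form):
    channel gating to highlight critical features, followed by a
    spatial attention map to emphasize informative regions."""

    def __init__(self, channels=16, reduction=4):
        super().__init__()
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),
            nn.Conv3d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.spatial_gate = nn.Sequential(
            nn.Conv3d(channels, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, f):  # f: (B, C, D, H, W)
        f = f * self.channel_gate(f)     # re-weight channels
        return f * self.spatial_gate(f)  # re-weight spatial locations


if __name__ == "__main__":
    x = torch.randn(1, 4, 32, 32, 32)  # four stacked MRI modalities
    fused = SCFFM()(MFFM()(x))
    print(fused.shape)  # torch.Size([1, 16, 32, 32, 32])
```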
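The reported numbers use the two standard BraTS evaluation metrics. The minimal sketch below shows how they are commonly computed: the Dice formula is standard, while the HD95 routine (boundary voxels via binary erosion, nearest-neighbour distances via a KD-tree) is one common implementation, not necessarily the one used in the paper.

```python
# Minimal sketch of the two reported metrics (Dice and HD95).
# Assumes non-empty binary masks and, for brevity, isotropic voxels.
import numpy as np
from scipy.ndimage import binary_erosion
from scipy.spatial import cKDTree


def dice(pred, gt):
    """Dice similarity coefficient for binary masks (often reported as %)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())


def surface_points(mask):
    """Voxel coordinates on the boundary of a binary mask."""
    border = mask & ~binary_erosion(mask)
    return np.argwhere(border)


def hd95(pred, gt, spacing=1.0):
    """95th-percentile symmetric Hausdorff distance; in mm when
    `spacing` is the (isotropic) voxel size in mm."""
    p = surface_points(pred.astype(bool))
    g = surface_points(gt.astype(bool))
    d_pg, _ = cKDTree(g).query(p)  # pred surface -> gt surface
    d_gp, _ = cKDTree(p).query(g)  # gt surface -> pred surface
    return np.percentile(np.hstack([d_pg, d_gp]), 95) * spacing


if __name__ == "__main__":
    gt = np.zeros((32, 32, 32), bool)
    gt[8:24, 8:24, 8:24] = True
    pred = np.roll(gt, 2, axis=0)  # simulate a slightly shifted prediction
    print(f"Dice = {dice(pred, gt):.3f}, HD95 = {hd95(pred, gt):.1f} voxels")
```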
Journal overview:
Neurocomputing publishes articles describing recent fundamental contributions in the field of neurocomputing. Its essential topics are neurocomputing theory, practice, and applications.