{"title":"M2FNet: multi-modality multi-level fusion network for segmentation of acute and sub-acute ischemic stroke","authors":"Shannan Chen, Xuanhe Zhao, Yang Duan, Ronghui Ju, Peizhuo Zang, Shouliang Qi","doi":"10.1007/s40747-025-01861-5","DOIUrl":null,"url":null,"abstract":"<p>Ischemic stroke, a leading cause of death and disability, necessitates accurate detection and automatic segmentation of lesions. While diffusion weight imaging is crucial, its single modality limits the detection of subtle lesions and artifacts. To address this, we propose a multi-modality, multi-level fusion network (M<sup>2</sup>FNet) that aggregates salient features from different modalities across various levels. Our method uses a multi-modal independent encoder to extract modality-specific features from images of different modalities, thereby preserving key details and ensuring rich features. In order to suppress noise while ensuring the best preservation of modality-specific information, we effectively integrate features of different modalities through a cross-modal encoder fusion module. In addition, a cross-modal decoder fusion module and a multi-modality joint loss are designed to further improve the fusion of high-level and low-level features in the up-sampling stage, dynamically utilizing complementary information from multiple modalities to improve segmentation accuracy. To verify the effectiveness of our proposed method, M<sup>2</sup>FNet was validated on two public magnetic resonance imaging ischemic stroke lesion segmentation benchmark datasets. Whether single or multi-modality, M<sup>2</sup>FNet performed better than ten other baseline methods. This highlights the effectiveness of M<sup>2</sup>FNet in multi-modality segmentation of ischemic stroke lesions, making it a promising and powerful quantitative analysis tool for rapid and accurate diagnostic support. The codes of M<sup>2</sup>FNet are available at https://github.com/ShannanChen/MMFNet.</p>","PeriodicalId":10524,"journal":{"name":"Complex & Intelligent Systems","volume":"41 1","pages":""},"PeriodicalIF":4.6000,"publicationDate":"2025-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Complex & Intelligent Systems","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s40747-025-01861-5","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Abstract
Ischemic stroke, a leading cause of death and disability, requires accurate detection and automatic segmentation of lesions. While diffusion-weighted imaging is crucial, relying on this single modality limits the detection of subtle lesions and is susceptible to artifacts. To address this, we propose a multi-modality, multi-level fusion network (M2FNet) that aggregates salient features from different modalities across multiple levels. Our method uses independent encoders for each modality to extract modality-specific features, preserving key details and producing rich feature representations. To suppress noise while best preserving modality-specific information, features from different modalities are integrated through a cross-modal encoder fusion module. In addition, a cross-modal decoder fusion module and a multi-modality joint loss are designed to further improve the fusion of high-level and low-level features in the up-sampling stage, dynamically exploiting complementary information from multiple modalities to improve segmentation accuracy. To verify its effectiveness, M2FNet was validated on two public magnetic resonance imaging benchmark datasets for ischemic stroke lesion segmentation. In both single- and multi-modality settings, M2FNet outperformed ten baseline methods. These results highlight the effectiveness of M2FNet for multi-modality segmentation of ischemic stroke lesions and make it a promising, powerful quantitative analysis tool for rapid and accurate diagnostic support. The code for M2FNet is available at https://github.com/ShannanChen/MMFNet.
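As an illustration of the general pattern the abstract describes, the following PyTorch sketch shows two modality-specific encoders whose features are fused level by level before a shared decoder. This is not the authors' implementation (see the GitHub link for the released code): the class names (ModalityEncoder, FusionBlock, TwoModalityFusionNet), the concatenation-plus-1x1-convolution fusion, and the channel configuration are illustrative stand-ins for the paper's cross-modal encoder and decoder fusion modules.

```python
# Minimal sketch (not the released M2FNet code) of modality-independent
# encoders fused at every level, assuming two 2D MRI modalities and a
# 3-level U-Net-style backbone. All names and choices here are hypothetical.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

class ModalityEncoder(nn.Module):
    """Independent encoder that keeps modality-specific features at each level."""
    def __init__(self, channels=(16, 32, 64)):
        super().__init__()
        self.blocks = nn.ModuleList()
        in_ch = 1
        for ch in channels:
            self.blocks.append(conv_block(in_ch, ch))
            in_ch = ch
        self.pool = nn.MaxPool2d(2)

    def forward(self, x):
        feats = []
        for i, blk in enumerate(self.blocks):
            x = blk(x)
            feats.append(x)                      # keep one feature map per level
            if i < len(self.blocks) - 1:
                x = self.pool(x)
        return feats

class FusionBlock(nn.Module):
    """Stand-in for a cross-modal fusion module: concatenation + 1x1 conv."""
    def __init__(self, ch):
        super().__init__()
        self.fuse = nn.Conv2d(2 * ch, ch, kernel_size=1)

    def forward(self, a, b):
        return self.fuse(torch.cat([a, b], dim=1))

class TwoModalityFusionNet(nn.Module):
    def __init__(self, channels=(16, 32, 64), n_classes=2):
        super().__init__()
        self.enc_a = ModalityEncoder(channels)   # e.g. DWI
        self.enc_b = ModalityEncoder(channels)   # e.g. a second MRI sequence
        self.fusions = nn.ModuleList(FusionBlock(c) for c in channels)
        self.up = nn.ModuleList(
            nn.ConvTranspose2d(channels[i], channels[i - 1], 2, stride=2)
            for i in range(len(channels) - 1, 0, -1)
        )
        self.dec = nn.ModuleList(
            conv_block(2 * channels[i - 1], channels[i - 1])
            for i in range(len(channels) - 1, 0, -1)
        )
        self.head = nn.Conv2d(channels[0], n_classes, kernel_size=1)

    def forward(self, x_a, x_b):
        feats_a, feats_b = self.enc_a(x_a), self.enc_b(x_b)
        # fuse the two modalities at every encoder level
        fused = [f(a, b) for f, a, b in zip(self.fusions, feats_a, feats_b)]
        x = fused[-1]
        for up, dec, skip in zip(self.up, self.dec, reversed(fused[:-1])):
            x = dec(torch.cat([up(x), skip], dim=1))  # merge high- and low-level fused features
        return self.head(x)

if __name__ == "__main__":
    net = TwoModalityFusionNet()
    dwi, other = torch.randn(1, 1, 64, 64), torch.randn(1, 1, 64, 64)
    print(net(dwi, other).shape)  # torch.Size([1, 2, 64, 64])
```

In a full training setup, the multi-modality joint loss mentioned in the abstract would presumably combine a term on the fused prediction with auxiliary terms on modality-specific predictions; the sketch above only exposes the fused segmentation head.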
Journal Description:
Complex & Intelligent Systems aims to provide a forum for presenting and discussing novel approaches, tools, and techniques aimed at cross-fertilization among the broad fields of complex systems, computational simulation, and intelligent analytics and visualization. The transdisciplinary research that the journal focuses on will expand the boundaries of our understanding by investigating the principles and processes that underlie many of the most profound problems facing society today.