M2FNet: multi-modality multi-level fusion network for segmentation of acute and sub-acute ischemic stroke

IF 4.6 · CAS Zone 2 (Computer Science) · JCR Q1, Computer Science, Artificial Intelligence
Shannan Chen, Xuanhe Zhao, Yang Duan, Ronghui Ju, Peizhuo Zang, Shouliang Qi
{"title":"M2FNet: multi-modality multi-level fusion network for segmentation of acute and sub-acute ischemic stroke","authors":"Shannan Chen, Xuanhe Zhao, Yang Duan, Ronghui Ju, Peizhuo Zang, Shouliang Qi","doi":"10.1007/s40747-025-01861-5","DOIUrl":null,"url":null,"abstract":"<p>Ischemic stroke, a leading cause of death and disability, necessitates accurate detection and automatic segmentation of lesions. While diffusion weight imaging is crucial, its single modality limits the detection of subtle lesions and artifacts. To address this, we propose a multi-modality, multi-level fusion network (M<sup>2</sup>FNet) that aggregates salient features from different modalities across various levels. Our method uses a multi-modal independent encoder to extract modality-specific features from images of different modalities, thereby preserving key details and ensuring rich features. In order to suppress noise while ensuring the best preservation of modality-specific information, we effectively integrate features of different modalities through a cross-modal encoder fusion module. In addition, a cross-modal decoder fusion module and a multi-modality joint loss are designed to further improve the fusion of high-level and low-level features in the up-sampling stage, dynamically utilizing complementary information from multiple modalities to improve segmentation accuracy. To verify the effectiveness of our proposed method, M<sup>2</sup>FNet was validated on two public magnetic resonance imaging ischemic stroke lesion segmentation benchmark datasets. Whether single or multi-modality, M<sup>2</sup>FNet performed better than ten other baseline methods. This highlights the effectiveness of M<sup>2</sup>FNet in multi-modality segmentation of ischemic stroke lesions, making it a promising and powerful quantitative analysis tool for rapid and accurate diagnostic support. The codes of M<sup>2</sup>FNet are available at https://github.com/ShannanChen/MMFNet.</p>","PeriodicalId":10524,"journal":{"name":"Complex & Intelligent Systems","volume":"41 1","pages":""},"PeriodicalIF":4.6000,"publicationDate":"2025-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Complex & Intelligent Systems","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1007/s40747-025-01861-5","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
引用次数: 0

Abstract

Ischemic stroke, a leading cause of death and disability, necessitates accurate detection and automatic segmentation of lesions. While diffusion-weighted imaging is crucial, relying on it as a single modality limits the detection of subtle lesions and leaves results vulnerable to artifacts. To address this, we propose a multi-modality, multi-level fusion network (M2FNet) that aggregates salient features from different modalities across multiple levels. Our method uses modality-specific independent encoders to extract features from images of each modality, thereby preserving key details and ensuring rich feature representations. To suppress noise while best preserving modality-specific information, features from different modalities are integrated through a cross-modal encoder fusion module. In addition, a cross-modal decoder fusion module and a multi-modality joint loss are designed to further improve the fusion of high-level and low-level features in the up-sampling stage, dynamically exploiting complementary information from multiple modalities to improve segmentation accuracy. To verify its effectiveness, M2FNet was validated on two public magnetic resonance imaging (MRI) ischemic stroke lesion segmentation benchmark datasets. In both single- and multi-modality settings, M2FNet outperformed ten baseline methods. These results highlight the effectiveness of M2FNet for multi-modality segmentation of ischemic stroke lesions, making it a promising and powerful quantitative analysis tool for rapid and accurate diagnostic support. The code for M2FNet is available at https://github.com/ShannanChen/MMFNet.
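The abstract describes the architecture only at a high level. The sketch below is a minimal, illustrative PyTorch rendition of the stated design: one independent encoder per MRI modality, a cross-modal fusion step at each encoder level, a decoder that fuses upsampled high-level features with low-level ones, and a joint segmentation loss. Every module name, channel size, and the exact loss form here are assumptions for illustration, not the authors' implementation; the actual code is in the linked repository.

```python
# Illustrative sketch only: module names, channel sizes, and the loss
# are assumptions, not the authors' M2FNet implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    """Two 3x3 convolutions with BatchNorm and ReLU (U-Net style)."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )


class MultiModalFusionNet(nn.Module):
    """Two-level toy version of the multi-modality fusion idea."""

    def __init__(self, num_modalities: int = 2, base_ch: int = 16):
        super().__init__()
        chs = [base_ch, base_ch * 2]
        # Modality-specific ("independent") encoders, one per input image.
        self.encoders = nn.ModuleList(
            nn.ModuleList([conv_block(1, chs[0]), conv_block(chs[0], chs[1])])
            for _ in range(num_modalities)
        )
        self.pool = nn.MaxPool2d(2)
        # Stand-in for the cross-modal encoder fusion module: a 1x1 conv
        # over the concatenated per-modality features at each level.
        self.enc_fuse = nn.ModuleList(nn.Conv2d(c * num_modalities, c, 1) for c in chs)
        # Stand-in for the cross-modal decoder fusion module: upsample the
        # high-level fused features and merge them with the low-level ones.
        self.up = nn.ConvTranspose2d(chs[1], chs[0], 2, stride=2)
        self.dec = conv_block(chs[0] * 2, chs[0])
        self.head = nn.Conv2d(chs[0], 1, 1)  # binary lesion-mask logits

    def forward(self, xs: list) -> torch.Tensor:
        # xs: one (B, 1, H, W) tensor per modality, e.g. [DWI, ADC].
        lvl1 = [enc[0](x) for enc, x in zip(self.encoders, xs)]
        lvl2 = [enc[1](self.pool(f)) for enc, f in zip(self.encoders, lvl1)]
        f1 = self.enc_fuse[0](torch.cat(lvl1, dim=1))  # low-level fusion
        f2 = self.enc_fuse[1](torch.cat(lvl2, dim=1))  # high-level fusion
        return self.head(self.dec(torch.cat([self.up(f2), f1], dim=1)))


def joint_loss(logits: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """A common joint segmentation loss (BCE + soft Dice). The paper's
    multi-modality joint loss presumably also supervises per-modality
    branches, which this simplified sketch omits."""
    bce = F.binary_cross_entropy_with_logits(logits, target)
    probs = torch.sigmoid(logits)
    inter = (probs * target).sum()
    dice = 1 - (2 * inter + 1.0) / (probs.sum() + target.sum() + 1.0)
    return bce + dice


if __name__ == "__main__":
    net = MultiModalFusionNet()
    dwi, adc = torch.randn(2, 1, 64, 64), torch.randn(2, 1, 64, 64)
    logits = net([dwi, adc])            # -> (2, 1, 64, 64) lesion logits
    print(logits.shape, joint_loss(logits, torch.rand(2, 1, 64, 64).round()))
```

Concatenation followed by a 1x1 convolution is the simplest possible fusion choice; the paper's fusion modules are presumably more elaborate, but the data flow above mirrors the encoder-level and decoder-level fusion described in the abstract.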

Source journal: Complex & Intelligent Systems (Computer Science, Artificial Intelligence)
CiteScore: 9.60
Self-citation rate: 10.30%
Annual publications: 297

Journal description: Complex & Intelligent Systems aims to provide a forum for presenting and discussing novel approaches, tools and techniques meant for attaining a cross-fertilization between the broad fields of complex systems, computational simulation, and intelligent analytics and visualization. The transdisciplinary research that the journal focuses on will expand the boundaries of our understanding by investigating the principles and processes that underlie many of the most profound problems facing society today.