MBUNeXt: Multibranch Encoder Aggregation Network Based on Layer-Fusion Strategy for Multimodal Brain Tumor Segmentation

IF 8.9 | CAS Zone 1, Computer Science | JCR Q1, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Qinghao Liu, Yuehao Zhu, Min Liu, Zhao Yao, Yaonan Wang, Erik Meijering
DOI: 10.1109/tnnls.2025.3593297 | Published: 2025-08-04 | Journal Article | IEEE Transactions on Neural Networks and Learning Systems
Citations: 0

Abstract

MBUNeXt: Multibranch Encoder Aggregation Network Based on Layer-Fusion Strategy for Multimodal Brain Tumor Segmentation.
Multimodal brain tumor segmentation (BraTS), integrated with surgical robots and navigation systems, enables accurate surgical interventions while maximizing the preservation of surrounding healthy brain tissue. However, multimodal brain scans suffer from large interclass differences in brain tumor subregions and from information redundancy, leading to inadequate fusion of multimodal information and significantly affecting the accuracy of BraTS. To address these problems, we propose a multibranch encoder aggregation (MEA) network based on a layer-fusion strategy, called multibranch UNeXt (MBUNeXt). The network comprises three well-designed modules: the multimodal feature attention (MFA) module, the MEA module, and the large-kernel convolution skip (LCS)-connection module. These modules work together to achieve precise segmentation of brain tumors. Specifically, the MFA module preserves the intermodality similarity structure through attention mechanisms and Gaussian modulation functions, thereby filtering out redundant information. The MEA module then exploits the correlations among multiple modalities to effectively integrate the multimodal hybrid feature representation and optimize multimodal information fusion. In addition, the LCS module constructs multiple groups of large-kernel depthwise separable convolutions, which guide the network to attend to features at different scales, thereby addressing the significant interclass differences among brain tumor subregions. Experimental results on the large-scale public datasets BraTS2019 and BraTS2021, which together consist of approximately 5000 3-D brain scans, demonstrate that our proposed method achieves state-of-the-art (SOTA) performance, with average Dice scores of 85.84% and 91.11%, respectively. It also performs well on the BraTS-Africa2024 dataset, which has low imaging quality, confirming its robustness. The code is available at https://github.com/liuqinghao2018/MBUNeXt.
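The LCS module's building block, as described in the abstract, is the large-kernel depthwise separable convolution: a per-channel spatial convolution with a large kernel followed by a 1x1 pointwise convolution that mixes channels. The sketch below is a minimal NumPy illustration of this general operation (valid padding, stride 1, 2-D for simplicity), not the authors' implementation; the function and argument names are hypothetical.

```python
import numpy as np

def depthwise_separable_conv2d(x, dw_kernels, pw_weights):
    """Large-kernel depthwise separable convolution (valid padding, stride 1).

    x          : (C, H, W) input feature map
    dw_kernels : (C, k, k) one large spatial kernel per input channel
    pw_weights : (C_out, C) 1x1 pointwise weights mixing channels
    """
    C, H, W = x.shape
    k = dw_kernels.shape[1]
    Ho, Wo = H - k + 1, W - k + 1

    # Depthwise step: each channel is filtered by its own k x k kernel,
    # so spatial context grows with k without any cross-channel mixing.
    dw = np.zeros((C, Ho, Wo))
    for c in range(C):
        for i in range(Ho):
            for j in range(Wo):
                dw[c, i, j] = np.sum(x[c, i:i + k, j:j + k] * dw_kernels[c])

    # Pointwise step: a 1x1 convolution mixes channels at every position.
    return np.einsum('oc,chw->ohw', pw_weights, dw)
```

The parameter saving is the reason large kernels stay affordable here: a full convolution needs C_out * C * k^2 weights, whereas the separable form needs only C * k^2 + C_out * C, so e.g. with C=2, C_out=4, k=7 that is 106 weights instead of 392.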
Source Journal
IEEE transactions on neural networks and learning systems
Categories: COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE; COMPUTER SCIENCE, HARDWARE & ARCHITECTURE
CiteScore: 23.80
Self-citation rate: 9.60%
Annual articles: 2102
Review turnaround: 3-8 weeks
Journal description: The focus of IEEE Transactions on Neural Networks and Learning Systems is to present scholarly articles discussing the theory, design, and applications of neural networks as well as other learning systems. The journal primarily highlights technical and scientific research in this domain.