FUNet: Frequency Channel Multi-Modal Fusion and Uncertain Region Adjustment Network for brain tumor segmentation

IF 15.5 · CAS Tier 1, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Yu Yan, Lei Zhang, Jiayi Li, Leyi Zhang, Zhang Yi
{"title":"FUNet:用于脑肿瘤分割的信道多模态融合及不确定区域调整网络","authors":"Yu Yan ,&nbsp;Lei Zhang ,&nbsp;Jiayi Li ,&nbsp;Leyi Zhang ,&nbsp;Zhang Yi","doi":"10.1016/j.inffus.2025.103474","DOIUrl":null,"url":null,"abstract":"<div><div>Multi-modal images are crucial for enhancing the performance of brain tumor segmentation. Existing multi-modal brain tumor segmentation methods have the following three main shortcomings: To begin with, framework design remains underexplored in current research. Secondly, effectively fusing multi-modal data, which characterize brain tumors differently, poses a significant challenge. Finally, uncertain and error-prone regions may exist within the fused features, complicating subsequent analysis. To address these issues, we propose Frequency Channel Multi-Modal Fusion and Uncertain Region Adjustment Network (FUNet). FUNet employs a triple-parallel-stream framework to integrate multi-modal information. In the encoder of the multi-modal information learning stream, we design a frequency channel multi-modal fusion module (FCMM), which distinguishes between the complementarity and redundancy of the modal information and mines the intrinsic connection. Additionally, in the decoder, we design an uncertain region adjustment module (URAM), which generates an adjustment factor to enable pixel-wise adjust uncertain error-prone regions existing in the fused features. Experiments on BrsTS 2018 and BraTS-PED 2023 demonstrate that our method achieves better results than other state-of-the-art methods.</div></div>","PeriodicalId":50367,"journal":{"name":"Information Fusion","volume":"125 ","pages":"Article 103474"},"PeriodicalIF":15.5000,"publicationDate":"2025-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"FUNet: Frequency Channel Multi-Modal Fusion and Uncertain Region Adjustment Network for brain tumor segmentation\",\"authors\":\"Yu Yan ,&nbsp;Lei Zhang ,&nbsp;Jiayi Li ,&nbsp;Leyi Zhang ,&nbsp;Zhang Yi\",\"doi\":\"10.1016/j.inffus.2025.103474\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Multi-modal images are crucial for enhancing the performance of brain tumor segmentation. Existing multi-modal brain tumor segmentation methods have the following three main shortcomings: To begin with, framework design remains underexplored in current research. Secondly, effectively fusing multi-modal data, which characterize brain tumors differently, poses a significant challenge. Finally, uncertain and error-prone regions may exist within the fused features, complicating subsequent analysis. To address these issues, we propose Frequency Channel Multi-Modal Fusion and Uncertain Region Adjustment Network (FUNet). FUNet employs a triple-parallel-stream framework to integrate multi-modal information. In the encoder of the multi-modal information learning stream, we design a frequency channel multi-modal fusion module (FCMM), which distinguishes between the complementarity and redundancy of the modal information and mines the intrinsic connection. Additionally, in the decoder, we design an uncertain region adjustment module (URAM), which generates an adjustment factor to enable pixel-wise adjust uncertain error-prone regions existing in the fused features. 
Experiments on BrsTS 2018 and BraTS-PED 2023 demonstrate that our method achieves better results than other state-of-the-art methods.</div></div>\",\"PeriodicalId\":50367,\"journal\":{\"name\":\"Information Fusion\",\"volume\":\"125 \",\"pages\":\"Article 103474\"},\"PeriodicalIF\":15.5000,\"publicationDate\":\"2025-07-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Information Fusion\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1566253525005470\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information Fusion","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1566253525005470","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Multi-modal images are crucial for enhancing the performance of brain tumor segmentation. Existing multi-modal brain tumor segmentation methods have three main shortcomings. First, framework design remains underexplored in current research. Second, effectively fusing multi-modal data, which characterize brain tumors differently, poses a significant challenge. Finally, uncertain, error-prone regions may exist within the fused features, complicating subsequent analysis. To address these issues, we propose the Frequency Channel Multi-Modal Fusion and Uncertain Region Adjustment Network (FUNet). FUNet employs a triple-parallel-stream framework to integrate multi-modal information. In the encoder of the multi-modal information learning stream, we design a frequency channel multi-modal fusion module (FCMM), which distinguishes between the complementarity and redundancy of the modal information and mines their intrinsic connections. In the decoder, we design an uncertain region adjustment module (URAM), which generates an adjustment factor for pixel-wise adjustment of uncertain, error-prone regions in the fused features. Experiments on BraTS 2018 and BraTS-PED 2023 demonstrate that our method achieves better results than other state-of-the-art methods.
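The abstract gives no implementation details for FCMM, but the stated idea (fusing modality features in the frequency domain while separating complementary from redundant content) can be illustrated. The following PyTorch sketch is one hypothetical reading, not the authors' code: the module name, the pooled per-channel gate, and the convex spectral blend are all illustrative assumptions.

```python
# Hypothetical sketch of frequency-channel fusion in the spirit of FCMM.
# Gating strategy, shapes, and naming are assumptions; the authors'
# actual design may differ.
import torch
import torch.nn as nn


class FrequencyChannelFusion(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        # Per-channel gate deciding how much of each modality to keep.
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
        # Gate in (0, 1) per channel, from pooled statistics of both modalities.
        g = self.gate(self.pool(torch.cat([feat_a, feat_b], dim=1)))
        # Complex spectra of each modality's feature map.
        spec_a = torch.fft.rfft2(feat_a, norm="ortho")
        spec_b = torch.fft.rfft2(feat_b, norm="ortho")
        # Convex per-channel blend in the frequency domain: redundant channels
        # end up averaged (g near 0.5), complementary channels lean toward one
        # modality (g near 0 or 1).
        fused_spec = g * spec_a + (1.0 - g) * spec_b
        return torch.fft.irfft2(fused_spec, s=feat_a.shape[-2:], norm="ortho")


# Usage sketch: fuse = FrequencyChannelFusion(64); out = fuse(t1_feat, flair_feat)
```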
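Likewise, a minimal sketch of what a URAM-style pixel-wise adjustment could look like, assuming predictive entropy of an auxiliary head as the uncertainty signal and a sigmoid-gated factor; both choices are illustrative, not the paper's formulation.

```python
# Hypothetical sketch of URAM-style adjustment: estimate per-pixel
# uncertainty from an auxiliary prediction, then rescale fused features
# where the network is unsure. Entropy as the uncertainty proxy and the
# residual re-weighting below are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class UncertainRegionAdjustment(nn.Module):
    def __init__(self, channels: int, num_classes: int):
        super().__init__()
        self.aux_head = nn.Conv2d(channels, num_classes, kernel_size=1)
        self.to_factor = nn.Sequential(
            nn.Conv2d(channels + 1, channels, kernel_size=3, padding=1),
            nn.Sigmoid(),  # adjustment factor in (0, 1) per pixel and channel
        )

    def forward(self, fused: torch.Tensor) -> torch.Tensor:
        # Predictive entropy of an auxiliary head marks uncertain pixels.
        probs = F.softmax(self.aux_head(fused), dim=1)
        entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1, keepdim=True)
        # Factor conditioned on the features and their uncertainty map.
        factor = self.to_factor(torch.cat([fused, entropy], dim=1))
        # Residual re-weighting: confident pixels pass nearly unchanged,
        # uncertain pixels are adjusted by the learned factor.
        return fused + entropy * (factor - 0.5) * fused
```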

Source journal: Information Fusion (Engineering & Technology, Computer Science: Theory & Methods)
CiteScore: 33.20
Self-citation rate: 4.30%
Annual articles: 161
Review time: 7.9 months
Aims and scope: Information Fusion serves as a central platform for showcasing advancements in multi-sensor, multi-source, multi-process information fusion, fostering collaboration among the diverse disciplines driving its progress. It is the leading outlet for sharing research and development in this field, focusing on architectures, algorithms, and applications. Papers dealing with fundamental theoretical analyses, as well as those demonstrating their application to real-world problems, are welcome.