C2IENet: Multi-branch medical image fusion based on contrastive constraint features and information exchange

IF 3.5 · CAS Region 3, Computer Science · JCR Q2, Computer Science, Information Systems
Jing Di, Chan Liang, Li Ren, Wenqing Guo, Jizhao Liu, Jing Lian
Citations: 0

Abstract


In the field of medical image fusion, traditional approaches often fail to differentiate between the unique characteristics of each raw image, leading to fused images with compromised texture and structural clarity. Addressing this, we introduce an advanced multi-branch fusion method characterized by contrast-enhanced features and interactive information exchange. This method integrates a multi-scale residual module and a gradient-dense module within a private branch to precisely extract and enrich texture details from individual raw images. In parallel, a common feature extraction branch, equipped with an information interaction module, processes paired raw images to synergistically capture complementary and shared functional information across modalities. Additionally, we implement a sophisticated attention mechanism tailored for both the private and public branches to enhance global feature extraction, thereby significantly improving the contrast and contour definition of the fused image. A novel correlation consistency loss function further refines the fusion process by optimizing the information sharing between modalities, promoting the correlation among basic cross-modal features while minimizing the correlation of high-frequency details across different modalities. Objective evaluations demonstrate substantial improvements in indices such as EN, MI, QMI, SSIM, AG, SF, and \(\text {Q}^{\text {AB/F}}\), with average increases of 23.67%, 12.35%, 4.22%, 20.81%, 8.96%, 6.38%, and 25.36%, respectively. These results underscore our method’s superiority in achieving enhanced texture detail and contrast in fused images compared to conventional algorithms, as validated by both subjective assessments and objective performance metrics.
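The evaluation indices cited above (EN, AG, SF, etc.) have standard definitions in the image-fusion literature. As a minimal sketch of how three of them are typically computed on a fused grayscale image — not the authors' own evaluation code — the following assumes an 8-bit image stored as a NumPy array:

```python
import numpy as np

def entropy(img):
    """EN: Shannon entropy of the 8-bit intensity histogram (0..8 bits)."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins before taking logs
    return -np.sum(p * np.log2(p))

def average_gradient(img):
    """AG: mean magnitude of local intensity change (texture richness)."""
    img = img.astype(np.float64)
    gx = np.diff(img, axis=1)[:-1, :]  # horizontal differences
    gy = np.diff(img, axis=0)[:, :-1]  # vertical differences, cropped to match
    return np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2))

def spatial_frequency(img):
    """SF: combined row/column frequency energy of the image."""
    img = img.astype(np.float64)
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))  # column frequency
    return np.sqrt(rf ** 2 + cf ** 2)

# Synthetic stand-in for a fused image, purely for illustration.
rng = np.random.default_rng(0)
fused = rng.integers(0, 256, size=(64, 64))
print(entropy(fused), average_gradient(fused), spatial_frequency(fused))
```

Higher values of all three indicate richer intensity distribution, texture detail, and edge activity respectively, which is why the reported percentage gains on these metrics support the paper's claim of sharper texture and contrast.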

Source journal: Multimedia Systems (Engineering & Technology — Computer Science: Theory & Methods)
CiteScore: 5.40
Self-citation rate: 7.70%
Articles per year: 148
Review time: 4.5 months
Journal description: This journal details innovative research ideas, emerging technologies, state-of-the-art methods and tools in all aspects of multimedia computing, communication, storage, and applications. It features theoretical, experimental, and survey articles.