SAFFusion: a saliency-aware frequency fusion network for multimodal medical image fusion.

IF 3.2 | Tier 2 (Medicine) | JCR Q2 | BIOCHEMICAL RESEARCH METHODS
Biomedical Optics Express · Pub Date: 2025-05-27 · eCollection Date: 2025-06-01 · DOI: 10.1364/BOE.555458
Renhe Liu, Yu Liu, Han Wang, Junxian Li, Kai Hu
{"title":"SAFFusion:一种用于多模态医学图像融合的显著性感知频率融合网络。","authors":"Renhe Liu, Yu Liu, Han Wang, Junxian Li, Kai Hu","doi":"10.1364/BOE.555458","DOIUrl":null,"url":null,"abstract":"<p><p>Medical image fusion integrates complementary information from multimodal medical images to provide comprehensive references for clinical decision-making, such as the diagnosis of Alzheimer's disease and the detection and segmentation of brain tumors. Although traditional and deep learning-based fusion methods have been extensively studied, they often fail to devise targeted strategies that fully utilize distinct regional or feature-specific information. This paper proposes SAFFusion, a saliency-aware frequency fusion network that integrates intensity and texture cues from multimodal medical images. We first introduce Mamba-UNet, a multiscale encoder-decoder architecture enhanced by the Mamba design, to improve global modeling in feature extraction and image reconstruction. By employing the contourlet transform in Mamba-UNet, we replace conventional pooling with multiscale representations and decompose spatial features into high- and low-frequency subbands. A dual-branch frequency feature fusion module then fuses cross-modality information according to the distinct characteristics of these frequency subbands. Furthermore, we apply latent low-rank representation (LatLRR) to assess image saliency and implement adaptive loss constraints to preserve information in salient and non-salient regions. Quantitative results on CT/MRI, SPECT/MRI, and PET/MRI fusion tasks show that SAFFusion outperforms state-of-the-art methods. Qualitative evaluations confirm that SAFFusion effectively merges prominent intensity features and rich textures from multiple sources.</p>","PeriodicalId":8969,"journal":{"name":"Biomedical optics express","volume":"16 6","pages":"2459-2481"},"PeriodicalIF":3.2000,"publicationDate":"2025-05-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12265500/pdf/","citationCount":"0","resultStr":"{\"title\":\"SAFFusion: a saliency-aware frequency fusion network for multimodal medical image fusion.\",\"authors\":\"Renhe Liu, Yu Liu, Han Wang, Junxian Li, Kai Hu\",\"doi\":\"10.1364/BOE.555458\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Medical image fusion integrates complementary information from multimodal medical images to provide comprehensive references for clinical decision-making, such as the diagnosis of Alzheimer's disease and the detection and segmentation of brain tumors. Although traditional and deep learning-based fusion methods have been extensively studied, they often fail to devise targeted strategies that fully utilize distinct regional or feature-specific information. This paper proposes SAFFusion, a saliency-aware frequency fusion network that integrates intensity and texture cues from multimodal medical images. We first introduce Mamba-UNet, a multiscale encoder-decoder architecture enhanced by the Mamba design, to improve global modeling in feature extraction and image reconstruction. By employing the contourlet transform in Mamba-UNet, we replace conventional pooling with multiscale representations and decompose spatial features into high- and low-frequency subbands. A dual-branch frequency feature fusion module then fuses cross-modality information according to the distinct characteristics of these frequency subbands. 
Furthermore, we apply latent low-rank representation (LatLRR) to assess image saliency and implement adaptive loss constraints to preserve information in salient and non-salient regions. Quantitative results on CT/MRI, SPECT/MRI, and PET/MRI fusion tasks show that SAFFusion outperforms state-of-the-art methods. Qualitative evaluations confirm that SAFFusion effectively merges prominent intensity features and rich textures from multiple sources.</p>\",\"PeriodicalId\":8969,\"journal\":{\"name\":\"Biomedical optics express\",\"volume\":\"16 6\",\"pages\":\"2459-2481\"},\"PeriodicalIF\":3.2000,\"publicationDate\":\"2025-05-27\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12265500/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Biomedical optics express\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1364/BOE.555458\",\"RegionNum\":2,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2025/6/1 0:00:00\",\"PubModel\":\"eCollection\",\"JCR\":\"Q2\",\"JCRName\":\"BIOCHEMICAL RESEARCH METHODS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Biomedical optics express","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1364/BOE.555458","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/6/1 0:00:00","PubModel":"eCollection","JCR":"Q2","JCRName":"BIOCHEMICAL RESEARCH METHODS","Score":null,"Total":0}
Citations: 0

Abstract


Medical image fusion integrates complementary information from multimodal medical images to provide comprehensive references for clinical decision-making, such as the diagnosis of Alzheimer's disease and the detection and segmentation of brain tumors. Although traditional and deep learning-based fusion methods have been extensively studied, they often fail to devise targeted strategies that fully utilize distinct regional or feature-specific information. This paper proposes SAFFusion, a saliency-aware frequency fusion network that integrates intensity and texture cues from multimodal medical images. We first introduce Mamba-UNet, a multiscale encoder-decoder architecture enhanced by the Mamba design, to improve global modeling in feature extraction and image reconstruction. By employing the contourlet transform in Mamba-UNet, we replace conventional pooling with multiscale representations and decompose spatial features into high- and low-frequency subbands. A dual-branch frequency feature fusion module then fuses cross-modality information according to the distinct characteristics of these frequency subbands. Furthermore, we apply latent low-rank representation (LatLRR) to assess image saliency and implement adaptive loss constraints to preserve information in salient and non-salient regions. Quantitative results on CT/MRI, SPECT/MRI, and PET/MRI fusion tasks show that SAFFusion outperforms state-of-the-art methods. Qualitative evaluations confirm that SAFFusion effectively merges prominent intensity features and rich textures from multiple sources.
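
To make the dual-branch frequency fusion idea in the abstract concrete, the following is a minimal, self-contained sketch rather than the authors' implementation: it splits two registered modalities into low- and high-frequency components with a simple Gaussian filter standing in for the contourlet transform, fuses the low-frequency (intensity) branch by energy weighting and the high-frequency (texture) branch by absolute-max selection, and recombines the results. The learned Mamba-UNet features, LatLRR saliency weighting, and adaptive losses described in the paper are omitted, and all function and variable names are illustrative assumptions.

```python
# Minimal conceptual sketch (not SAFFusion's actual pipeline): dual-branch
# frequency fusion of two registered, normalized grayscale modalities.
# A Gaussian low-pass split stands in for the contourlet decomposition.
import numpy as np
from scipy.ndimage import gaussian_filter

def split_frequency(img, sigma=2.0):
    """Split an image into low- and high-frequency components."""
    low = gaussian_filter(img, sigma)   # smooth intensity structure
    high = img - low                    # residual textures and edges
    return low, high

def fuse_dual_branch(img_a, img_b, sigma=2.0):
    """Fuse two modalities with branch-specific rules per frequency band."""
    low_a, high_a = split_frequency(img_a, sigma)
    low_b, high_b = split_frequency(img_b, sigma)

    # Low-frequency branch: energy-weighted average preserves salient intensity.
    w_a = low_a**2 / (low_a**2 + low_b**2 + 1e-8)
    low_fused = w_a * low_a + (1.0 - w_a) * low_b

    # High-frequency branch: absolute-max selection keeps the sharper texture.
    high_fused = np.where(np.abs(high_a) >= np.abs(high_b), high_a, high_b)

    return np.clip(low_fused + high_fused, 0.0, 1.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ct = rng.random((256, 256))   # placeholder for a normalized CT slice
    mri = rng.random((256, 256))  # placeholder for the registered MRI slice
    fused = fuse_dual_branch(ct, mri)
    print(fused.shape, fused.min(), fused.max())
```

In SAFFusion itself, the frequency split is performed by a contourlet decomposition inside the learned encoder and the fusion rules are learned rather than hand-crafted; the sketch only mirrors the high-level structure of the dual-branch design.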

Source Journal
Biomedical Optics Express (BIOCHEMICAL RESEARCH METHODS; OPTICS)
CiteScore: 6.80
Self-citation rate: 11.80%
Articles published: 633
Review time: 1 month
Journal description: The journal's scope encompasses fundamental research, technology development, biomedical studies and clinical applications. BOEx focuses on the leading edge topics in the field, including: tissue optics and spectroscopy; novel microscopies; optical coherence tomography; diffuse and fluorescence tomography; photoacoustic and multimodal imaging; molecular imaging and therapies; nanophotonic biosensing; optical biophysics/photobiology; microfluidic optical devices; and vision research.