MMR-Mamba: Multi-modal MRI reconstruction with Mamba and spatial-frequency information fusion

Impact Factor 10.7 · JCR Q1 (Computer Science, Artificial Intelligence) · CAS Tier 1 (Medicine)
Jing Zou, Lanqing Liu, Qi Chen, Shujun Wang, Zhanli Hu, Xiaohan Xing, Jing Qin
DOI: 10.1016/j.media.2025.103549
Journal: Medical Image Analysis, Volume 102, Article 103549
Published: 2025-03-21
URL: https://www.sciencedirect.com/science/article/pii/S1361841525000969
Citations: 0

Abstract

Multi-modal MRI offers valuable complementary information for diagnosis and treatment; however, its clinical utility is limited by prolonged scanning time. To accelerate the acquisition process, a practical approach is to reconstruct images of the target modality, which requires longer scanning time, from under-sampled k-space data using the fully-sampled reference modality with shorter scanning time as guidance. The primary challenge of this task lies in comprehensively and efficiently integrating complementary information from different modalities to achieve high-quality reconstruction. Existing methods struggle with this challenge: (1) convolution-based models fail to capture long-range dependencies; (2) transformer-based models, while excelling in global feature modeling, suffer from quadratic computational complexity. To address this dilemma, we propose MMR-Mamba, a novel framework that thoroughly and efficiently integrates multi-modal features for MRI reconstruction, leveraging Mamba’s capability to capture long-range dependencies with linear computational complexity while exploiting global properties of the Fourier domain. Specifically, we first design a Target modality-guided Cross Mamba (TCM) module in the spatial domain, which maximally restores the target modality information by selectively incorporating relevant information from the reference modality. Then, we introduce a Selective Frequency Fusion (SFF) module to efficiently integrate global information in the Fourier domain and recover high-frequency signals for the reconstruction of structural details. Furthermore, we devise an Adaptive Spatial-Frequency Fusion (ASFF) module, which mutually enhances the spatial and frequency domains by supplementing less informative channels from one domain with corresponding channels from the other. Extensive experiments on the BraTS and fastMRI knee datasets demonstrate the superiority of our MMR-Mamba over state-of-the-art reconstruction methods. 
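The Selective Frequency Fusion idea described above — integrating global information in the Fourier domain while recovering high-frequency detail from the reference modality — can be illustrated with a minimal NumPy sketch. This is not the paper's SFF module; the low-frequency mask radius and blend weight are hypothetical parameters chosen for illustration.

```python
import numpy as np

def frequency_fusion(target, reference, radius=8, alpha=0.5):
    """Illustrative frequency-domain fusion: keep the target image's
    low-frequency content and blend in high-frequency components from
    the reference modality. Hypothetical sketch, not the paper's SFF."""
    # 2D FFT of both modalities, with the zero frequency shifted to the center.
    Ft = np.fft.fftshift(np.fft.fft2(target))
    Fr = np.fft.fftshift(np.fft.fft2(reference))
    h, w = target.shape
    yy, xx = np.mgrid[:h, :w]
    dist = np.hypot(yy - h / 2, xx - w / 2)
    low = dist <= radius  # central (low-frequency) band
    # Preserve the target's low frequencies; blend high frequencies.
    fused = np.where(low, Ft, (1 - alpha) * Ft + alpha * Fr)
    return np.real(np.fft.ifft2(np.fft.ifftshift(fused)))
```

With `alpha=0` the function is an identity on the target image, which makes the role of the blend weight explicit: it controls only how much high-frequency structure is borrowed from the reference modality.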
The code is publicly available at https://github.com/zoujing925/MMR-Mamba.
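The Adaptive Spatial-Frequency Fusion idea — supplementing less informative channels from one domain with the corresponding channels from the other — can be sketched as a channel-wise weighting. The per-channel energy score used here is a hypothetical stand-in for the module's learned weighting, shown only to make the mechanism concrete.

```python
import numpy as np

def adaptive_channel_fusion(spatial_feat, freq_feat):
    """Illustrative adaptive fusion of (channels, H, W) feature maps:
    score each channel by its mean activation energy and weight the two
    domains accordingly. Hypothetical scoring, not the paper's ASFF."""
    s_score = np.mean(spatial_feat ** 2, axis=(1, 2))
    f_score = np.mean(freq_feat ** 2, axis=(1, 2))
    # Per-channel weight favoring the more informative domain.
    w = s_score / (s_score + f_score + 1e-8)
    w = w[:, None, None]  # broadcast over spatial dimensions
    return w * spatial_feat + (1 - w) * freq_feat
```

A channel whose frequency-domain counterpart carries no energy is passed through from the spatial domain essentially unchanged, which mirrors the "supplement the weaker channel" behavior described in the abstract.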
Source journal
Medical Image Analysis (Engineering, Biomedical)
CiteScore: 22.10
Self-citation rate: 6.40%
Articles per year: 309
Review time: 6.6 months
About the journal: Medical Image Analysis serves as a platform for sharing new research findings in the realm of medical and biological image analysis, with a focus on applications of computer vision, virtual reality, and robotics to biomedical imaging challenges. The journal prioritizes the publication of high-quality, original papers contributing to the fundamental science of processing, analyzing, and utilizing medical and biological images. It welcomes approaches utilizing biomedical image datasets across all spatial scales, from molecular/cellular imaging to tissue/organ imaging.