Efficient 3D magnetic resonance image reconstruction by 2D transformers and attention-based fusion model

IF 4.9 · CAS Zone 2 (Medicine) · JCR Q1 (Engineering, Biomedical)
Tianyi Yang, Xiaohan Liu, Yiming Liu, Xuebin Sun, Zhenchang Wang, Yanwei Pang
DOI: 10.1016/j.bspc.2025.108071
Journal: Biomedical Signal Processing and Control, Volume 110, Article 108071
Published: 2025-05-28
Citations: 0

Abstract

Utilizing 3D networks to reconstruct images from undersampled 3D k-space data shows potential for accelerating 3D MR imaging. The CNN-based reconstruction network U-Net is a classical method of this kind and is widely used in accelerated-MRI research. However, directly reconstructing 3D MR images with a 3D U-Net incurs significant memory consumption, owing to the huge volume of 3D MRI data and the high computational complexity of 3D convolution. To reduce the dependence of 3D MRI reconstruction on high-end computing resources, we propose an efficient 3D MRI Slice-to-Volume and Fusion Reconstruction (SVFR) method, which reduces memory consumption by 40% compared to 3D U-Net. Specifically, the proposed method integrates attention-based reconstruction and fusion models into a unified framework. To save computational resources, instead of directly processing 3D data, we select 2D undersampled slices along three mutually orthogonal directions and introduce a pretrained 2D Vision Transformer into the MRI reconstruction field, reconstructing 3D MR images from 2D slices. In addition, to compensate for the loss of spatial detail between adjacent slices caused by slice-wise reconstruction, we employ a volume-wise fusion model that extracts deep features of the reconstructed 3D MR images along the three original directions and fuses them at the spatial level, preserving finer spatial detail. Experimental results on a large 3D multi-coil brain k-space dataset and the Stanford Fullysampled 3D FSE Knees dataset clearly demonstrate that the proposed method achieves excellent reconstruction performance and efficiency under various acceleration factors.
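The slice-to-volume-and-fusion pipeline described above can be sketched schematically: reconstruct 2D slices along each of the three orthogonal axes, restack each set into a candidate volume, and combine the three candidates with voxel-wise attention weights. The sketch below is a hypothetical illustration only; the identity `recon2d` stands in for the paper's pretrained 2D Vision Transformer, and the random `logits` stand in for the learned fusion attention.

```python
import numpy as np

def reconstruct_along(vol, axis, recon2d):
    # Apply a 2D per-slice reconstructor along one axis, then restack
    # the reconstructed slices back into a candidate 3D volume.
    slices = np.moveaxis(vol, axis, 0)
    recon = np.stack([recon2d(s) for s in slices])
    return np.moveaxis(recon, 0, axis)

def fuse(volumes, logits):
    # Voxel-wise softmax attention over the three directional
    # reconstructions: weights sum to 1 at every voxel.
    w = np.exp(logits - logits.max(axis=0, keepdims=True))
    w /= w.sum(axis=0, keepdims=True)
    return (w * volumes).sum(axis=0)

rng = np.random.default_rng(0)
vol = rng.standard_normal((8, 8, 8))      # toy 3D volume
recon2d = lambda s: s                     # identity stand-in for the 2D ViT
vols = np.stack([reconstruct_along(vol, a, recon2d) for a in range(3)])
logits = rng.standard_normal(vols.shape)  # stand-in attention logits
fused = fuse(vols, logits)
print(fused.shape)  # (8, 8, 8)
```

With the identity reconstructor all three candidate volumes coincide, so the fused output equals the input regardless of the attention weights; with a real per-slice model, the fusion step is what recovers inter-slice spatial detail that any single direction misses.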
Source Journal
Biomedical Signal Processing and Control
CiteScore: 9.80
Self-citation rate: 13.70%
Articles per year: 822
Review time: 4 months
Journal introduction: Biomedical Signal Processing and Control aims to provide a cross-disciplinary international forum for the interchange of information on research in the measurement and analysis of signals and images in clinical medicine and the biological sciences. Emphasis is placed on contributions dealing with practical, applications-led research on the use of methods and devices in clinical diagnosis, patient monitoring and management. Biomedical Signal Processing and Control reflects the main areas in which these methods are being used and developed at the interface of both engineering and clinical science. The scope of the journal includes relevant review papers, technical notes, short communications and letters. Tutorial papers and special issues will also be published.