Multi-channel MRI reconstruction using cascaded Swinμ transformers with overlapped attention.

IF 3.3 · CAS Tier 3 (Medicine) · JCR Q2 (Engineering, Biomedical)
Tahsin Rahman, Ali Bilgin, Sergio D Cabrera
Journal: Physics in Medicine and Biology, vol. 70, no. 7
DOI: 10.1088/1361-6560/adb933 (https://doi.org/10.1088/1361-6560/adb933)
Published: 2025-03-19 (Journal Article)
Citations: 0

Abstract

Objective. Deep neural networks have been shown to be very effective at artifact reduction tasks such as magnetic resonance imaging (MRI) reconstruction from undersampled k-space data. In recent years, attention-based vision transformer models have been shown to outperform purely convolutional models at a wide variety of tasks, including MRI reconstruction. Our objective is to investigate the use of different transformer architectures for multi-channel cascaded MRI reconstruction.

Approach. In this work, we explore the effective use of cascades of small transformers in multi-channel undersampled MRI reconstruction. We introduce overlapped attention and compare it to hybrid attention in shifted-window (Swin) transformers. We also investigate the impact of the number of Swin transformer layers in each architecture. The proposed methods are compared to state-of-the-art MRI reconstruction methods for undersampled reconstruction on standard 3T and low-field (0.3T) T1-weighted MRI images at multiple acceleration rates.

Main results. The models with overlapped attention achieve significantly higher or equivalent quantitative test metrics compared to state-of-the-art convolutional approaches. They also show more consistent reconstruction performance across different acceleration rates compared to their hybrid attention counterparts. We have also shown that transformer architectures with fewer layers can be as effective as those with more layers when used in cascaded MRI reconstruction problems.

Significance. The feasibility and effectiveness of cascades of small transformers with overlapped attention for MRI reconstruction are demonstrated without pre-training the transformer on ImageNet or other large-scale datasets.
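The reconstruction task the abstract describes can be illustrated with a minimal sketch (not the paper's method): retrospectively undersampling the k-space of an image and computing the zero-filled baseline that learned models such as the cascaded Swin transformers aim to improve upon. The acceleration factor, mask pattern, and stand-in image below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.standard_normal((64, 64))  # stand-in for a T1-weighted slice

# Fully sampled k-space via a centered 2D FFT.
kspace = np.fft.fftshift(np.fft.fft2(image))

# Cartesian undersampling at acceleration R = 4: keep every 4th
# phase-encode line plus a fully sampled low-frequency band at the
# center of k-space (a common autocalibration region).
R = 4
mask = np.zeros(64, dtype=bool)
mask[::R] = True
mask[64 // 2 - 4 : 64 // 2 + 4] = True
kspace_us = kspace * mask[np.newaxis, :]

# Zero-filled reconstruction: inverse FFT of the masked k-space.
# Aliasing artifacts here are what a learned reconstructor removes.
recon = np.fft.ifft2(np.fft.ifftshift(kspace_us)).real

print(mask.sum() / 64)  # effective sampling fraction
```

Acceleration rate in this setting is simply the reciprocal of the sampling fraction; the center band slightly lowers the effective acceleration below R.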

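As a hedged sketch of what overlapped windowing can look like (the paper's exact overlapped-attention formulation is not reproduced here), the snippet below extracts enlarged context windows around each non-overlapping tile, so that neighboring windows share border features. Window size, overlap margin, and reflect padding are illustrative assumptions.

```python
import numpy as np

def overlapped_windows(x, win=4, overlap=2):
    """Extract a (win + 2*overlap)-sized window around each win x win tile.

    x: 2D feature map of shape (H, W), with H and W divisible by win.
    Returns an array of shape (H//win, W//win, win+2*overlap, win+2*overlap).
    """
    H, W = x.shape
    pad = overlap
    xp = np.pad(x, pad, mode="reflect")  # pad borders so edge tiles get context
    size = win + 2 * pad
    out = np.empty((H // win, W // win, size, size), dtype=x.dtype)
    for i in range(H // win):
        for j in range(W // win):
            r, c = i * win, j * win  # tile's top-left corner in padded coords
            out[i, j] = xp[r : r + size, c : c + size]
    return out

x = np.arange(64, dtype=float).reshape(8, 8)
w = overlapped_windows(x, win=4, overlap=2)
print(w.shape)  # one enlarged window per 4x4 tile
```

Each window contains its tile at the center plus a margin of neighboring features, so attention computed per window can see across tile boundaries without the full quadratic cost of global attention.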
Source journal
Physics in Medicine and Biology (Medicine / Engineering: Biomedical)
CiteScore
6.50
Self-citation rate
14.30%
Articles published
409
Review turnaround
2 months
Journal description: The development and application of theoretical, computational and experimental physics to medicine, physiology and biology. Topics covered are: therapy physics (including ionizing and non-ionizing radiation); biomedical imaging (e.g. x-ray, magnetic resonance, ultrasound, optical and nuclear imaging); image-guided interventions; image reconstruction and analysis (including kinetic modelling); artificial intelligence in biomedical physics and analysis; nanoparticles in imaging and therapy; radiobiology; radiation protection and patient dose monitoring; radiation dosimetry.