MRI super-resolution reconstruction using efficient diffusion probabilistic model with residual shifting.

Impact Factor: 3.3 · CAS Tier 3 (Medicine) · JCR Q2 (Engineering, Biomedical)
Mojtaba Safari, Shansong Wang, Zach Eidex, Qiang Li, Richard L J Qiu, Erik H Middlebrooks, David S Yu, Xiaofeng Yang
{"title":"MRI super-resolution reconstruction using efficient diffusion probabilistic model with residual shifting.","authors":"Mojtaba Safari, Shansong Wang, Zach Eidex, Qiang Li, Richard L J Qiu, Erik H Middlebrooks, David S Yu, Xiaofeng Yang","doi":"10.1088/1361-6560/ade049","DOIUrl":null,"url":null,"abstract":"<p><p><i>Objective.</i>Magnetic resonance imaging (MRI) is essential in clinical and research contexts, providing exceptional soft-tissue contrast. However, prolonged acquisition times often lead to patient discomfort and motion artifacts. Diffusion-based deep learning super-resolution (SR) techniques reconstruct high-resolution (HR) images from low-resolution (LR) pairs, but they involve extensive sampling steps, limiting real-time application. To overcome these issues, this study introduces a residual error-shifting mechanism markedly reducing sampling steps while maintaining vital anatomical details, thereby accelerating MRI reconstruction.<i>Approach.</i>We developed Res-SRDiff, a novel diffusion-based SR framework incorporating residual error shifting into the forward diffusion process. This integration aligns the degraded HR and LR distributions, enabling efficient HR image reconstruction. We evaluated Res-SRDiff using ultra-high-field brain T1 MP2RAGE maps and T2-weighted prostate images, benchmarking it against Bicubic, Pix2pix, CycleGAN, SPSR, I<sup>2</sup>SB, and TM-DDPM methods. Quantitative assessments employed peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), gradient magnitude similarity deviation (GMSD), and learned perceptual image patch similarity. Additionally, we qualitatively and quantitatively assessed the proposed framework's individual components through an ablation study and conducted a Likert-based image quality evaluation.<i>Main results.</i>Res-SRDiff significantly surpassed most comparison methods regarding PSNR, SSIM, and GMSD for both datasets, with statistically significant improvements (p-values≪0.05). The model achieved high-fidelity image reconstruction using only four sampling steps, drastically reducing computation time to under one second per slice. In contrast, traditional methods like TM-DDPM and I<sup>2</sup>SB required approximately 20 and 38 s per slice, respectively. Qualitative analysis showed Res-SRDiff effectively preserved fine anatomical details and lesion morphologies. The Likert study indicated that our method received the highest scores,4.14±0.77(brain) and4.80±0.40(prostate).<i>Significance.</i>Res-SRDiff demonstrates efficiency and accuracy, markedly improving computational speed and image quality. Incorporating residual error shifting into diffusion-based SR facilitates rapid, robust HR image reconstruction, enhancing clinical MRI workflow and advancing medical imaging research. 
Code available athttps://github.com/mosaf/Res-SRDiff.</p>","PeriodicalId":20185,"journal":{"name":"Physics in medicine and biology","volume":" ","pages":""},"PeriodicalIF":3.3000,"publicationDate":"2025-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12167925/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Physics in medicine and biology","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.1088/1361-6560/ade049","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, BIOMEDICAL","Score":null,"Total":0}
引用次数: 0

Abstract

Objective. Magnetic resonance imaging (MRI) is essential in clinical and research contexts, providing exceptional soft-tissue contrast. However, prolonged acquisition times often lead to patient discomfort and motion artifacts. Diffusion-based deep learning super-resolution (SR) techniques reconstruct high-resolution (HR) images from low-resolution (LR) pairs, but they involve extensive sampling steps, limiting real-time application. To overcome these issues, this study introduces a residual error-shifting mechanism that markedly reduces sampling steps while maintaining vital anatomical details, thereby accelerating MRI reconstruction.

Approach. We developed Res-SRDiff, a novel diffusion-based SR framework incorporating residual error shifting into the forward diffusion process. This integration aligns the degraded HR and LR distributions, enabling efficient HR image reconstruction. We evaluated Res-SRDiff using ultra-high-field brain T1 MP2RAGE maps and T2-weighted prostate images, benchmarking it against Bicubic, Pix2pix, CycleGAN, SPSR, I2SB, and TM-DDPM methods. Quantitative assessments employed peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), gradient magnitude similarity deviation (GMSD), and learned perceptual image patch similarity (LPIPS). Additionally, we qualitatively and quantitatively assessed the proposed framework's individual components through an ablation study and conducted a Likert-based image quality evaluation.

Main results. Res-SRDiff significantly surpassed most comparison methods in PSNR, SSIM, and GMSD on both datasets, with statistically significant improvements (p-values ≪ 0.05). The model achieved high-fidelity image reconstruction using only four sampling steps, drastically reducing computation time to under one second per slice. In contrast, traditional methods such as TM-DDPM and I2SB required approximately 20 and 38 s per slice, respectively. Qualitative analysis showed that Res-SRDiff effectively preserved fine anatomical details and lesion morphologies. The Likert study indicated that our method received the highest scores: 4.14 ± 0.77 (brain) and 4.80 ± 0.40 (prostate).

Significance. Res-SRDiff demonstrates efficiency and accuracy, markedly improving computational speed and image quality. Incorporating residual error shifting into diffusion-based SR facilitates rapid, robust HR image reconstruction, enhancing the clinical MRI workflow and advancing medical imaging research. Code available at https://github.com/mosaf/Res-SRDiff.
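The key mechanism — shifting the HR-to-LR residual into the forward diffusion so that the terminal state is centered on the LR image rather than on pure Gaussian noise — can be illustrated in a few lines. The sketch below is a minimal, hedged illustration of a ResShift-style forward step and a simplified few-step reverse loop; the schedule, the noise scale kappa, and the model call signature are assumptions made for illustration, not the authors' implementation (see the linked repository for that).

```python
# Minimal sketch of a residual-shifting (ResShift-style) forward process and a
# simplified few-step reverse loop. Hyperparameters and the model signature are
# illustrative assumptions; see https://github.com/mosaf/Res-SRDiff for the
# authors' actual Res-SRDiff code.
import torch

def eta_schedule(T: int, eta_1: float = 0.04, eta_T: float = 0.999) -> torch.Tensor:
    """Monotonically increasing shift schedule eta_1 < ... < eta_T (geometric here)."""
    return eta_1 * (eta_T / eta_1) ** (torch.arange(T, dtype=torch.float32) / (T - 1))

def forward_shift(x_hr: torch.Tensor, y_lr_up: torch.Tensor, t: int,
                  etas: torch.Tensor, kappa: float = 2.0) -> torch.Tensor:
    """Sample x_t ~ N(x_0 + eta_t * e_0, kappa^2 * eta_t * I) with e_0 = y - x_0:
    the forward process shifts the LR-HR residual into the state instead of
    diffusing x_0 all the way to pure Gaussian noise."""
    e0 = y_lr_up - x_hr                       # residual between (upsampled) LR and HR
    eps = torch.randn_like(x_hr)
    return x_hr + etas[t] * e0 + kappa * etas[t].sqrt() * eps

@torch.no_grad()
def sample(model, y_lr_up: torch.Tensor, etas: torch.Tensor, kappa: float = 2.0) -> torch.Tensor:
    """Few-step reverse sampling: start near the LR image and repeatedly predict x_0.
    Re-applying the (smaller) forward shift stands in for the exact reverse posterior."""
    T = len(etas)
    x_t = y_lr_up + kappa * etas[-1].sqrt() * torch.randn_like(y_lr_up)  # x_T ~ noisy LR
    for t in reversed(range(T)):
        x0_hat = model(x_t, torch.tensor([t]), y_lr_up)  # hypothetical signature: network predicts x_0
        x_t = forward_shift(x0_hat, y_lr_up, t - 1, etas, kappa) if t > 0 else x0_hat
    return x_t
```

With T = 4 steps, as reported in the abstract, this loop calls the network only four times per slice, which is where the sub-second reconstruction time comes from.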
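For readers less familiar with GMSD, the snippet below shows how the reported image-quality metrics can be computed. PSNR and SSIM come from scikit-image; GMSD is implemented directly from Xue et al. (2014), omitting that paper's initial 2x average-pooling step for brevity, with the constant c set to the value commonly used for images normalised to [0, 1]. LPIPS needs a pretrained network (e.g. the lpips package) and is not sketched here. This is an illustrative sketch, not the paper's evaluation code.

```python
# Illustrative computation of PSNR, SSIM, and GMSD on a synthetic slice.
import numpy as np
from scipy.ndimage import convolve
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def gmsd(ref: np.ndarray, dist: np.ndarray, c: float = 0.0026) -> float:
    """Gradient Magnitude Similarity Deviation for 2-D images scaled to [0, 1]."""
    hx = np.array([[1, 0, -1]] * 3, dtype=np.float64) / 3.0   # Prewitt x-kernel
    hy = hx.T                                                  # Prewitt y-kernel
    def grad_mag(img):
        gx = convolve(img, hx, mode="nearest")
        gy = convolve(img, hy, mode="nearest")
        return np.sqrt(gx ** 2 + gy ** 2)
    m_r, m_d = grad_mag(ref), grad_mag(dist)
    gms = (2 * m_r * m_d + c) / (m_r ** 2 + m_d ** 2 + c)      # per-pixel similarity map
    return float(gms.std())                                     # deviation of the map = GMSD

# Toy usage on a synthetic 256x256 slice (lower GMSD is better).
rng = np.random.default_rng(0)
hr = rng.random((256, 256))
sr = np.clip(hr + 0.01 * rng.standard_normal(hr.shape), 0.0, 1.0)
print("PSNR:", peak_signal_noise_ratio(hr, sr, data_range=1.0))
print("SSIM:", structural_similarity(hr, sr, data_range=1.0))
print("GMSD:", gmsd(hr, sr))
```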

Source journal
Physics in Medicine and Biology (Medicine; Engineering, Biomedical)
CiteScore: 6.50
Self-citation rate: 14.30%
Articles per year: 409
Review time: 2 months
Journal scope: The development and application of theoretical, computational and experimental physics to medicine, physiology and biology. Topics covered are: therapy physics (including ionizing and non-ionizing radiation); biomedical imaging (e.g. x-ray, magnetic resonance, ultrasound, optical and nuclear imaging); image-guided interventions; image reconstruction and analysis (including kinetic modelling); artificial intelligence in biomedical physics and analysis; nanoparticles in imaging and therapy; radiobiology; radiation protection and patient dose monitoring; radiation dosimetry.