A Physics-Informed Deep Learning Model for MRI Brain Motion Correction

arXiv publication date: 2025-02-13
Mojtaba Safari, Shansong Wang, Zach Eidex, Richard Qiu, Chih-Wei Chang, David S Yu, Xiaofeng Yang

Abstract

Background: Magnetic resonance imaging (MRI) is an essential brain imaging tool, but its long acquisition times make it highly susceptible to motion artifacts that can degrade diagnostic quality.

Purpose: This work aims to develop and evaluate a novel physics-informed motion correction network, termed PI-MoCoNet, which leverages complementary information from both the spatial and k-space domains. The primary goal is to robustly remove motion artifacts from high-resolution brain MRI images without explicit motion parameter estimation, thereby preserving image fidelity and enhancing diagnostic reliability.

Materials and methods: PI-MoCoNet is designed as a dual-network framework consisting of a motion detection network and a motion correction network. The motion detection network employs a U-net architecture to identify corrupted k-space lines using a spatial averaging module, thereby reducing prediction uncertainty. The correction network, inspired by recent advances in U-net architectures and incorporating Swin Transformer blocks, reconstructs motion-corrected images by leveraging three loss components: the reconstruction loss (L1), a learned perceptual image patch similarity (LPIPS) loss, and a data consistency loss (Ldc) that enforces fidelity in the k-space domain. Realistic motion artifacts were simulated by perturbing phase-encoding lines with random rigid transformations. The method was evaluated on two public datasets (IXI and MR-ART). Comparative assessments were made against baseline models, including Pix2Pix GAN, CycleGAN, and a conventional U-net, using quantitative metrics such as peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and normalized mean square error (NMSE).
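The motion-simulation step described above can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the authors' code: the corrupted-line fraction, the maximum shift, and the decision to spare the k-space centre are all assumptions chosen for demonstration. The key physical idea, stated in the abstract, is that replacing a subset of phase-encoding lines with lines from a rigidly transformed copy of the image mimics inter-shot motion; a pure translation in image space corresponds to a linear phase ramp in k-space.

```python
import numpy as np

def simulate_motion_artifact(image, corrupted_fraction=0.3, max_shift=2.0, rng=None):
    """Corrupt a 2D image with simulated rigid-motion artifacts.

    A subset of phase-encoding lines (rows of k-space) is replaced with the
    corresponding lines from a randomly translated copy of the image.
    `corrupted_fraction` and `max_shift` (in pixels) are illustrative
    parameters, not the paper's settings.
    """
    rng = np.random.default_rng(rng)
    ny, nx = image.shape
    kspace = np.fft.fftshift(np.fft.fft2(image))

    # Choose random phase-encoding lines to corrupt, sparing the k-space
    # centre (which dominates image contrast).
    lines = rng.choice(ny, size=int(corrupted_fraction * ny), replace=False)
    lines = lines[np.abs(lines - ny // 2) > ny // 16]

    # A translation (dy, dx) in image space is a linear phase ramp in k-space.
    dy, dx = rng.uniform(-max_shift, max_shift, size=2)
    ky = np.fft.fftshift(np.fft.fftfreq(ny))[:, None]
    kx = np.fft.fftshift(np.fft.fftfreq(nx))[None, :]
    moved = kspace * np.exp(-2j * np.pi * (ky * dy + kx * dx))

    corrupted = kspace.copy()
    corrupted[lines, :] = moved[lines, :]
    artifact_image = np.abs(np.fft.ifft2(np.fft.ifftshift(corrupted)))
    return artifact_image, lines
```

The returned `lines` index doubles as a ground-truth label for a detection network of the kind the paper describes, which predicts which k-space lines were corrupted.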

Results: PI-MoCoNet demonstrated significant improvements over competing methods across all levels of motion artifacts. On the IXI dataset, for minor motion artifacts, PSNR improved from 34.15 dB in the motion-corrupted images to 45.95 dB after correction, SSIM increased from 0.87 to 1.00, and NMSE was reduced from 0.55% to 0.04%. For moderate artifacts, PSNR increased from 30.23 dB to 42.16 dB, SSIM from 0.80 to 0.99, and NMSE from 1.32% to 0.09%. In the case of heavy artifacts, PSNR improved from 27.99 dB to 36.01 dB, SSIM from 0.75 to 0.97, and NMSE decreased from 2.21% to 0.36%. On the MR-ART dataset, PSNR values increased from 23.15 dB to 33.01 dB for low artifact levels and from 21.23 dB to 31.72 dB for high artifact levels; concurrently, SSIM improved from 0.72 to 0.87 and from 0.63 to 0.83, while NMSE decreased from 10.08% to 6.24% and from 14.77% to 8.32%, respectively. An ablation study further confirmed that incorporating both data consistency and perceptual losses led to an approximate 1 dB gain in PSNR and a reduction of 0.17% in NMSE compared to using the reconstruction loss alone.
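For reference, the PSNR and NMSE figures quoted above can be computed as below. The exact normalization conventions used in the paper are not stated in the abstract, so this sketch uses one common convention (PSNR over the reference's dynamic range, NMSE as a percentage of the reference's energy); SSIM is omitted because its windowed computation is more involved and is typically taken from a library such as scikit-image.

```python
import numpy as np

def psnr(reference, test, data_range=None):
    """Peak signal-to-noise ratio in dB, relative to the reference's range."""
    if data_range is None:
        data_range = reference.max() - reference.min()
    mse = np.mean((reference - test) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def nmse(reference, test):
    """Normalized mean square error, expressed as a percentage."""
    return 100.0 * np.sum((reference - test) ** 2) / np.sum(reference ** 2)
```

Under these conventions, a uniform error of 0.1 on a unit-range image gives a PSNR of 20 dB; the 45.95 dB reported for minor artifacts on IXI corresponds to a far smaller residual error.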

Conclusions: PI-MoCoNet is a robust, physics-informed framework for mitigating brain motion artifacts in MRI. It successfully integrates spatial and k-space information to enhance image quality. Its superior performance over comparative methods highlights its potential for clinical application, particularly in settings where patient motion is unavoidable. The source code is available at: https://github.com/mosaf/PI-MoCoNet.git.
