Mojtaba Safari, Shansong Wang, Zach Eidex, Richard Qiu, Chih-Wei Chang, David S Yu, Xiaofeng Yang
{"title":"A Physics-Informed Deep Learning Model for MRI Brain Motion Correction.","authors":"Mojtaba Safari, Shansong Wang, Zach Eidex, Richard Qiu, Chih-Wei Chang, David S Yu, Xiaofeng Yang","doi":"","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>Magnetic resonance imaging (MRI) is an essential brain imaging tool, but its long acquisition times make it highly susceptible to motion artifacts that can degrade diagnostic quality.</p><p><strong>Purpose: </strong>This work aims to develop and evaluate a novel physics-informed motion correction network, termed PI-MoCoNet, which leverages complementary information from both the spatial and <i>k</i>-space domains. The primary goal is to robustly remove motion artifacts from high-resolution brain MRI images without explicit motion parameter estimation, thereby preserving image fidelity and enhancing diagnostic reliability.</p><p><strong>Materials and methods: </strong>PI-MoCoNet is designed as a dual-network framework consisting of a motion detection network and a motion correction network. The motion detection network employs a U-net architecture to identify corrupted <i>k</i>-space lines using a spatial averaging module, thereby reducing prediction uncertainty. The correction network, inspired by recent advances in U-net architectures and incorporating Swin Transformer blocks, reconstructs motion-corrected images by leveraging three loss components: the reconstruction loss <math> <mrow><mrow><mo>(</mo> <mrow><msub><mtext>𝓛</mtext> <mn>1</mn></msub> </mrow> <mo>)</mo></mrow> </mrow> </math> , a learned perceptual image patch similarity (LPIPS) loss, and a data consistency loss <math> <mrow><mrow><mo>(</mo> <mrow><msub><mtext>𝓛</mtext> <mtext>dc</mtext></msub> </mrow> <mo>)</mo></mrow> </mrow> </math> that enforces fidelity in the <i>k</i>-space domain. Realistic motion artifacts were simulated by perturbing phase encoding lines with random rigid transformations. 
The method was evaluated on two public datasets (IXI and MR-ART). Comparative assessments were made against baseline models, including Pix2Pix GAN, CycleGAN, and a conventional U-net, using quantitative metrics such as peak signal-to-noise ratio(PSNR), structural similarity index measure (SSIM), and normalized mean square error (NMSE).</p><p><strong>Results: </strong>PI-MoCoNet demonstrated significant improvements over competing methods across all levels of motion artifacts. On the IXI dataset, for minor motion artifacts, PSNR improved from 34.15 dB in the motion-corrupted images to 45.95 dB after correction, SSIM increased from 0.87 to 1.00, and NMSE was reduced from 0.55% to 0.04%. For moderate artifacts, PSNR increased from 30.23 dB to 42.16 dB, SSIM from 0.80 to 0.99, and NMSE from 1.32% to 0.09%. In the case of heavy artifacts, PSNR improved from 27.99 dB to 36.01 dB, SSIM from 0.75 to 0.97, and NMSE decreased from 2.21% to 0.36%. On the MR-ART dataset, PSNR values increased from 23.15 dB to 33.01 dB for low artifact levels and from 21.23 dB to 31.72 dB for high artifact levels; concurrently, SSIM improved from 0.72 to 0.87 and from 0.63 to 0.83, while NMSE decreased from 10.08% to 6.24% and from 14.77% to 8.32%, respectively. An ablation study further confirmed that incorporating both data consistency and perceptual losses led to an approximate 1 dB gain in PSNR and a reduction of 0.17% in NMSE compared to using the reconstruction loss alone.</p><p><strong>Conclusions: </strong>PI-MoCoNet is a robust, physics-informed framework for mitigating brain motion artifacts in MRI. It successfully integrates spatial and <i>k</i>-space information to enhance image quality. Its superior performance over comparative methods highlights its potential for clinical application, particularly in settings where patient motion is unavoidable. 
The source code is available at: https://github.com/mosaf/PI-MoCoNet.git.</p>","PeriodicalId":93888,"journal":{"name":"ArXiv","volume":" ","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2025-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11844622/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ArXiv","FirstCategoryId":"1085","ListUrlMain":"","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Background: Magnetic resonance imaging (MRI) is an essential brain imaging tool, but its long acquisition times make it highly susceptible to motion artifacts that can degrade diagnostic quality.
Purpose: This work aims to develop and evaluate a novel physics-informed motion correction network, termed PI-MoCoNet, which leverages complementary information from both the spatial and k-space domains. The primary goal is to robustly remove motion artifacts from high-resolution brain MRI images without explicit motion parameter estimation, thereby preserving image fidelity and enhancing diagnostic reliability.
Materials and methods: PI-MoCoNet is designed as a dual-network framework consisting of a motion detection network and a motion correction network. The motion detection network employs a U-net architecture to identify corrupted k-space lines using a spatial averaging module, thereby reducing prediction uncertainty. The correction network, inspired by recent advances in U-net architectures and incorporating Swin Transformer blocks, reconstructs motion-corrected images by leveraging three loss components: the reconstruction loss (L1), a learned perceptual image patch similarity (LPIPS) loss, and a data consistency loss (Ldc) that enforces fidelity in the k-space domain. Realistic motion artifacts were simulated by perturbing phase-encoding lines with random rigid transformations. The method was evaluated on two public datasets (IXI and MR-ART). Comparative assessments were made against baseline models, including Pix2Pix GAN, CycleGAN, and a conventional U-net, using quantitative metrics such as peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and normalized mean square error (NMSE).
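The simulation strategy described above can be illustrated with a minimal NumPy sketch: random phase-encoding lines of an image's k-space are perturbed with phase ramps corresponding to rigid in-plane translations (the Fourier shift theorem). This is an assumption-laden toy version — the paper's exact corruption fractions and motion ranges are not specified here, and rotations are omitted for brevity.

```python
import numpy as np

def simulate_motion_artifact(image, corruption_fraction=0.3, max_shift=5.0, seed=0):
    """Corrupt random phase-encoding (row) lines of the image's k-space with
    phase ramps that correspond to random rigid translations.

    Illustrative sketch only: parameters (corruption_fraction, max_shift)
    are hypothetical, and rotations are left out for simplicity.
    """
    rng = np.random.default_rng(seed)
    ny, nx = image.shape
    kspace = np.fft.fftshift(np.fft.fft2(image))

    # Pick which phase-encoding lines were acquired "during motion".
    n_corrupt = int(corruption_fraction * ny)
    lines = rng.choice(ny, size=n_corrupt, replace=False)

    # Spatial frequencies (cycles/pixel), shifted to match the shifted k-space.
    kx = np.fft.fftshift(np.fft.fftfreq(nx))
    ky = np.fft.fftshift(np.fft.fftfreq(ny))

    for line in lines:
        dx, dy = rng.uniform(-max_shift, max_shift, size=2)
        # A translation by (dx, dy) in image space is a linear phase
        # ramp in k-space (Fourier shift theorem).
        phase = np.exp(-2j * np.pi * (kx * dx + ky[line] * dy))
        kspace[line, :] *= phase

    corrupted = np.abs(np.fft.ifft2(np.fft.ifftshift(kspace)))
    return corrupted, lines
```

Because only a subset of lines is perturbed, the result exhibits the characteristic ghosting/ringing of rigid-motion artifacts rather than a global shift of the whole image.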
Results: PI-MoCoNet demonstrated significant improvements over competing methods across all levels of motion artifacts. On the IXI dataset, for minor motion artifacts, PSNR improved from 34.15 dB in the motion-corrupted images to 45.95 dB after correction, SSIM increased from 0.87 to 1.00, and NMSE was reduced from 0.55% to 0.04%. For moderate artifacts, PSNR increased from 30.23 dB to 42.16 dB, SSIM from 0.80 to 0.99, and NMSE from 1.32% to 0.09%. In the case of heavy artifacts, PSNR improved from 27.99 dB to 36.01 dB, SSIM from 0.75 to 0.97, and NMSE decreased from 2.21% to 0.36%. On the MR-ART dataset, PSNR values increased from 23.15 dB to 33.01 dB for low artifact levels and from 21.23 dB to 31.72 dB for high artifact levels; concurrently, SSIM improved from 0.72 to 0.87 and from 0.63 to 0.83, while NMSE decreased from 10.08% to 6.24% and from 14.77% to 8.32%, respectively. An ablation study further confirmed that incorporating both data consistency and perceptual losses led to an approximate 1 dB gain in PSNR and a reduction of 0.17% in NMSE compared to using the reconstruction loss alone.
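For reference, the PSNR and NMSE figures quoted above follow the standard definitions, sketched below in NumPy (SSIM is omitted because it requires windowed local statistics; in practice a library implementation such as scikit-image's would be used).

```python
import numpy as np

def psnr(ref, test, data_range=None):
    """Peak signal-to-noise ratio in dB: 10*log10(peak^2 / MSE)."""
    if data_range is None:
        data_range = ref.max() - ref.min()
    mse = np.mean((ref - test) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def nmse_percent(ref, test):
    """Normalized mean square error as a percentage:
    100 * ||ref - test||^2 / ||ref||^2."""
    return 100.0 * np.sum((ref - test) ** 2) / np.sum(ref ** 2)
```

With these conventions, a lower NMSE and a higher PSNR both indicate a corrected image closer to the motion-free reference.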
Conclusions: PI-MoCoNet is a robust, physics-informed framework for mitigating brain motion artifacts in MRI. It successfully integrates spatial and k-space information to enhance image quality. Its superior performance over comparative methods highlights its potential for clinical application, particularly in settings where patient motion is unavoidable. The source code is available at: https://github.com/mosaf/PI-MoCoNet.git.
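The way the framework couples the spatial and k-space domains can be sketched as a composite training objective: an image-space L1 reconstruction term plus a k-space data consistency term evaluated only on uncorrupted lines. The weighting `lam_dc` is hypothetical, and the LPIPS term is omitted here because it requires a pretrained perceptual network.

```python
import numpy as np

def data_consistency_loss(pred_img, measured_kspace, mask):
    """L_dc: L1 distance between the FFT of the predicted image and the
    acquired k-space, evaluated only where mask == 1 (uncorrupted lines)."""
    pred_k = np.fft.fft2(pred_img)
    return np.mean(np.abs((pred_k - measured_kspace) * mask))

def total_loss(pred_img, target_img, measured_kspace, mask, lam_dc=0.1):
    """Composite objective: image-space L1 + weighted k-space consistency.

    Sketch only: lam_dc is a hypothetical weight, and the paper's LPIPS
    perceptual term is not included.
    """
    l1 = np.mean(np.abs(pred_img - target_img))  # reconstruction loss (L1)
    return l1 + lam_dc * data_consistency_loss(pred_img, measured_kspace, mask)
```

A perfect prediction drives both terms to zero, while any deviation is penalized in whichever domain (spatial or k-space) it appears — the mechanism by which the data consistency term enforces fidelity to the acquired measurements.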