Learning contrast and content representations for synthesizing magnetic resonance image of arbitrary contrast

Honglin Xiong, Yulin Wang, Zhenrong Shen, Kaicong Sun, Yu Fang, Yan Chen, Dinggang Shen, Qian Wang

Medical Image Analysis, Volume 104, Article 103635 (2025). DOI: 10.1016/j.media.2025.103635
Abstract
Magnetic Resonance Imaging (MRI) produces images with different contrasts, providing complementary information for clinical diagnoses and research. However, acquiring a complete set of MRI sequences can be challenging due to limitations such as lengthy scan time, motion artifacts, hardware constraints, and patient-related factors. To address this issue, we propose a novel method to learn Contrast and Content Representations (CCR) for cross-contrast MRI synthesis. Unlike existing approaches that implicitly model relationships between different contrasts, our key insight is to explicitly separate contrast information from anatomical content, allowing for more flexible and accurate synthesis. CCR learns a unified content representation that captures the underlying anatomical structures common to all contrasts, along with separate contrast representations that encode specific contrast information. By recombining the learned content representation with an arbitrary contrast representation, our method can synthesize MR images of any desired contrast. We validate our approach on both the BraTS 2021 dataset and an in-house dataset with diverse FSE acquisition parameters. Our experiments demonstrate that our CCR framework not only handles diverse input–output contrast combinations using a single trained model but also generalizes to synthesize images of new contrasts unseen during training. Quantitatively, CCR outperforms state-of-the-art methods by an average of 2.9 dB in PSNR and 0.08 in SSIM across all tested combinations. The code is available at https://github.com/xionghonglin/Arbitrary_Contrast_MRI_Synthesis.
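To make the recombination idea in the abstract concrete, below is a minimal PyTorch sketch of a content/contrast disentanglement setup: a shared content encoder extracts an anatomical feature map, a contrast encoder compresses a reference image into a global contrast code, and a decoder fuses the two. All module names, shapes, and layer choices here are hypothetical illustrations, not the authors' CCR implementation; see their GitHub repository for the actual code.

```python
# Hypothetical sketch of content/contrast disentanglement for MRI synthesis.
# Not the authors' implementation; layer choices are illustrative only.
import torch
import torch.nn as nn

class ContentEncoder(nn.Module):
    """Maps an MR slice to a spatial content (anatomy) feature map."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
        )

    def forward(self, x):      # x: (B, 1, H, W)
        return self.net(x)     # content map: (B, ch, H, W)

class ContrastEncoder(nn.Module):
    """Compresses a reference slice into a global contrast code."""
    def __init__(self, z_dim=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, z_dim),
        )

    def forward(self, x):      # x: (B, 1, H, W)
        return self.net(x)     # contrast code: (B, z_dim)

class Decoder(nn.Module):
    """Recombines a content map with an arbitrary contrast code."""
    def __init__(self, ch=32, z_dim=8):
        super().__init__()
        self.fuse = nn.Conv2d(ch + z_dim, ch, 3, padding=1)
        self.out = nn.Conv2d(ch, 1, 3, padding=1)

    def forward(self, content, z):
        # Broadcast the contrast code over the spatial grid, then fuse.
        b, _, h, w = content.shape
        z_map = z[:, :, None, None].expand(b, z.shape[1], h, w)
        feat = torch.relu(self.fuse(torch.cat([content, z_map], dim=1)))
        return self.out(feat)

# Synthesis: anatomy from a T1 slice + contrast code from a T2 slice
# yields a T2-like image of the T1 subject's anatomy.
enc_content, enc_contrast, dec = ContentEncoder(), ContrastEncoder(), Decoder()
t1 = torch.randn(2, 1, 64, 64)
t2_reference = torch.randn(2, 1, 64, 64)
fake_t2 = dec(enc_content(t1), enc_contrast(t2_reference))  # (2, 1, 64, 64)
```

Because the contrast code is just an input to the decoder, swapping in a code extracted from (or interpolated toward) any reference contrast produces the corresponding output contrast, which is what enables a single trained model to cover arbitrary input-output combinations.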
About the journal
Medical Image Analysis serves as a platform for sharing new research findings in the realm of medical and biological image analysis, with a focus on applications of computer vision, virtual reality, and robotics to biomedical imaging challenges. The journal prioritizes the publication of high-quality, original papers contributing to the fundamental science of processing, analyzing, and utilizing medical and biological images. It welcomes approaches utilizing biomedical image datasets across all spatial scales, from molecular/cellular imaging to tissue/organ imaging.