Learning contrast and content representations for synthesizing magnetic resonance image of arbitrary contrast

IF 10.7 · CAS Tier 1 (Medicine) · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Honglin Xiong, Yulin Wang, Zhenrong Shen, Kaicong Sun, Yu Fang, Yan Chen, Dinggang Shen, Qian Wang
{"title":"Learning contrast and content representations for synthesizing magnetic resonance image of arbitrary contrast","authors":"Honglin Xiong ,&nbsp;Yulin Wang ,&nbsp;Zhenrong Shen ,&nbsp;Kaicong Sun ,&nbsp;Yu Fang ,&nbsp;Yan Chen ,&nbsp;Dinggang Shen ,&nbsp;Qian Wang","doi":"10.1016/j.media.2025.103635","DOIUrl":null,"url":null,"abstract":"<div><div>Magnetic Resonance Imaging (MRI) produces images with different contrasts, providing complementary information for clinical diagnoses and research. However, acquiring a complete set of MRI sequences can be challenging due to limitations such as lengthy scan time, motion artifacts, hardware constraints, and patient-related factors. To address this issue, we propose a novel method to learn Contrast and Content Representations (CCR) for cross-contrast MRI synthesis. Unlike existing approaches that implicitly model relationships between different contrasts, our key insight is to explicitly separate contrast information from anatomical content, allowing for more flexible and accurate synthesis. CCR learns a unified content representation that captures the underlying anatomical structures common to all contrasts, along with separate contrast representations that encode specific contrast information. By recombining the learned content representation with an arbitrary contrast representation, our method can synthesize MR images of any desired contrast. We validate our approach on both the BraTS 2021 dataset and an in-house dataset with diverse FSE acquisition parameters. Our experiments demonstrate that our CCR framework not only handles diverse input–output contrast combinations using a single trained model but also generalizes to synthesize images of new contrasts unseen during training. Quantitatively, CCR outperforms state-of-the-art methods by an average of 2.9 dB in PSNR and 0.08 in SSIM across all tested combinations. The code is available at <span><span>https://github.com/xionghonglin/Arbitrary_Contrast_MRI_Synthesis</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":18328,"journal":{"name":"Medical image analysis","volume":"104 ","pages":"Article 103635"},"PeriodicalIF":10.7000,"publicationDate":"2025-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Medical image analysis","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1361841525001823","RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Magnetic Resonance Imaging (MRI) produces images with different contrasts, providing complementary information for clinical diagnoses and research. However, acquiring a complete set of MRI sequences can be challenging due to limitations such as lengthy scan time, motion artifacts, hardware constraints, and patient-related factors. To address this issue, we propose a novel method to learn Contrast and Content Representations (CCR) for cross-contrast MRI synthesis. Unlike existing approaches that implicitly model relationships between different contrasts, our key insight is to explicitly separate contrast information from anatomical content, allowing for more flexible and accurate synthesis. CCR learns a unified content representation that captures the underlying anatomical structures common to all contrasts, along with separate contrast representations that encode specific contrast information. By recombining the learned content representation with an arbitrary contrast representation, our method can synthesize MR images of any desired contrast. We validate our approach on both the BraTS 2021 dataset and an in-house dataset with diverse FSE acquisition parameters. Our experiments demonstrate that our CCR framework not only handles diverse input–output contrast combinations using a single trained model but also generalizes to synthesize images of new contrasts unseen during training. Quantitatively, CCR outperforms state-of-the-art methods by an average of 2.9 dB in PSNR and 0.08 in SSIM across all tested combinations. The code is available at https://github.com/xionghonglin/Arbitrary_Contrast_MRI_Synthesis.
Source Journal

Medical Image Analysis (Engineering & Technology: Biomedical Engineering)
CiteScore: 22.10
Self-citation rate: 6.40%
Articles per year: 309
Review time: 6.6 months

Journal overview: Medical Image Analysis serves as a platform for sharing new research findings in the realm of medical and biological image analysis, with a focus on applications of computer vision, virtual reality, and robotics to biomedical imaging challenges. The journal prioritizes the publication of high-quality, original papers contributing to the fundamental science of processing, analyzing, and utilizing medical and biological images. It welcomes approaches utilizing biomedical image datasets across all spatial scales, from molecular/cellular imaging to tissue/organ imaging.