Robust deep MRI contrast synthesis using a prior-based and task-oriented 3D network.

Imaging Neuroscience (Cambridge, Mass.) · Pub Date: 2025-08-26 · eCollection Date: 2025-01-01 · DOI: 10.1162/IMAG.a.116
Sergio Morell-Ortega, Marina Ruiz-Perez, Marien Gadea, Roberto Vivo-Hernando, Gregorio Rubio, Fernando Aparici, Mariam de la Iglesia-Vaya, Thomas Tourdias, Boris Mansencal, Pierrick Coupé, José V Manjón

Abstract

Magnetic resonance imaging (MRI) is one of the most widely used tools for clinical diagnosis. Depending on the acquisition parameters, different image contrasts can be obtained, providing complementary information about the patient's anatomy and potential pathological findings. However, acquiring multiple contrasts requires more time and additional resources, and increases patient discomfort. Consequently, not all image modalities are typically acquired. One solution for obtaining the missing modalities is to use contrast synthesis methods. Most existing synthesis methods work on 2D slices due to memory limitations, which produces inconsistencies and artifacts when the 3D volume is reconstructed. In this work, we present a 3D deep learning-based approach for synthesizing T2-weighted MR volumes from T1-weighted ones. To preserve anatomical details and enhance image quality, we propose a segmentation-oriented loss function combined with a frequency space information loss. To make the proposed method more robust and applicable to a wider range of imaging scenarios, we also incorporate a priori information in the form of a multi-atlas. Additionally, we employ a semi-supervised learning framework that improves the model's generalizability across diverse datasets, potentially improving its performance in clinical settings with varying patient demographics and imaging protocols. By integrating prior anatomical knowledge with frequency domain and segmentation loss functions, our approach outperforms state-of-the-art methods, particularly in segmentation tasks, with the largest gains observed in challenging cases.
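The abstract describes a composite training objective combining an image-space term, a frequency-space information loss, and a segmentation-oriented term. The paper's exact formulation is not given here, so the following is only a minimal NumPy sketch of one plausible reading: an L1 image loss plus an L1 distance between 3D Fourier transforms, with a caller-supplied segmentation term. The function names and the weights `w_img`, `w_freq`, and `w_seg` are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def frequency_loss(pred, target):
    """Mean L1 distance between the 3D FFTs of two volumes.
    A hedged sketch of a 'frequency space information loss';
    the published method may use a different norm or weighting."""
    f_pred = np.fft.fftn(pred)
    f_target = np.fft.fftn(target)
    return float(np.mean(np.abs(f_pred - f_target)))

def combined_loss(pred, target, seg_loss=0.0,
                  w_img=1.0, w_freq=0.1, w_seg=0.1):
    """Illustrative composite objective: image-space L1 plus the
    frequency term plus a segmentation-oriented term supplied by
    the caller (e.g., a Dice loss from a frozen segmentation net).
    All weights are hypothetical."""
    l_img = float(np.mean(np.abs(pred - target)))
    return w_img * l_img + w_freq * frequency_loss(pred, target) + w_seg * seg_loss
```

In practice such a segmentation term is typically computed by passing the synthesized volume through a pretrained segmentation network and comparing its output against labels, so the synthesis model is optimized not only for voxel fidelity but also for downstream segmentation quality.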
