Sergio Morell-Ortega, Marina Ruiz-Perez, Marien Gadea, Roberto Vivo-Hernando, Gregorio Rubio, Fernando Aparici, Mariam de la Iglesia-Vaya, Thomas Tourdias, Boris Mansencal, Pierrick Coupé, José V Manjón
Robust deep MRI contrast synthesis using a prior-based and task-oriented 3D network
Imaging Neuroscience (Cambridge, Mass.), vol. 3, 2025
DOI: 10.1162/IMAG.a.116
Published: 2025-08-26 (eCollection 2025)
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12392303/pdf/
Citations: 0
Abstract
Magnetic resonance imaging (MRI) is one of the most widely used tools for clinical diagnosis. Depending on the acquisition parameters, different image contrasts can be obtained, providing complementary information about the patient's anatomy and potential pathological findings. However, acquiring multiple contrasts requires more scan time and additional resources, and increases patient discomfort. Consequently, not all image modalities are typically acquired. One solution to obtain the missing modalities is to use contrast synthesis methods. Most existing synthesis methods work with 2D slices due to memory limitations, which produces inconsistencies and artifacts when the 3D volume is reconstructed. In this work, we present a 3D deep learning-based approach for synthesizing T2-weighted MR volumes from T1-weighted ones. To preserve anatomical details and enhance image quality, we propose a segmentation-oriented loss function combined with a frequency-space information loss. To make the proposed method more robust and applicable to a wider range of imaging scenarios, we also incorporate a priori information in the form of a multi-atlas. Additionally, we employ a semi-supervised learning framework that improves the model's generalizability across diverse datasets, potentially improving its performance in clinical settings with varying patient demographics and imaging protocols. By integrating prior anatomical knowledge with frequency-domain and segmentation loss functions, our approach outperforms state-of-the-art methods, particularly in segmentation tasks, and demonstrates significant improvements in challenging cases.
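The abstract describes combining a voxel-wise reconstruction term with a frequency-space loss and a segmentation-oriented term. The paper's exact formulation is not given here, so the following is only a minimal NumPy sketch of that general idea: the frequency term compares FFT magnitude spectra of the synthesized and target volumes, and the weights `alpha`, `beta`, `gamma` are hypothetical placeholders, not values from the paper.

```python
import numpy as np

def frequency_loss(pred, target):
    """Illustrative frequency-space loss: mean absolute difference
    between the magnitude spectra of the 3D FFTs of two volumes."""
    fp = np.abs(np.fft.fftn(pred))
    ft = np.abs(np.fft.fftn(target))
    return float(np.mean(np.abs(fp - ft)))

def combined_loss(pred, target, seg_loss, alpha=1.0, beta=0.1, gamma=0.1):
    """Weighted sum of a voxel-wise L1 term, the frequency-space term,
    and an externally computed segmentation-oriented term (seg_loss).
    The weights are hypothetical, chosen only for illustration."""
    voxel = float(np.mean(np.abs(pred - target)))
    freq = frequency_loss(pred, target)
    return alpha * voxel + beta * freq + gamma * seg_loss
```

In practice such losses would be implemented with differentiable tensor operations inside a deep learning framework so gradients can flow to the 3D network; the sketch above only shows how the terms combine.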