Synthesizing Contrast-Enhanced MR Images from Noncontrast MR Images Using Deep Learning

Gowtham Murugesan, Fang F. Yu, Michael Achilleos, John DeBevits, Sahil Nalawade, Chandan Ganesh, Ben Wagner, Ananth J Madhuranthakam, Joseph A. Maldjian

American Journal of Neuroradiology, March 1, 2024. DOI: 10.3174/ajnr.a8107
Abstract
BACKGROUND AND PURPOSE:
Concerns about the toxicity of gadolinium-based contrast agents have driven the search for alternative imaging methods, and recent developments in deep learning offer a potential solution. The purpose of this study was to use deep learning to synthesize virtual gadolinium contrast-enhanced T1-weighted MR images from noncontrast multiparametric MR images in patients with primary brain tumors.
MATERIALS AND METHODS:
We trained and validated a deep learning network by using MR images from 335 subjects in the Brain Tumor Segmentation Challenge 2019 training data set. A held-out set of 125 subjects from the Brain Tumor Segmentation Challenge 2019 validation data set was used to test the generalization of the model. A residual inception DenseNet, called T1c-ET, was developed and trained to simultaneously synthesize virtual contrast-enhanced T1-weighted (vT1c) images and segment the enhancing portions of the tumor. Three expert neuroradiologists independently scored the synthesized vT1c images by using a 3-point Likert scale, evaluating image quality and contrast enhancement against ground truth T1c images (1 = poor, 2 = good, 3 = excellent).
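The abstract does not detail the T1c-ET architecture, but the key design idea stated here is a single network with two simultaneous outputs: a synthesized vT1c image and an enhancement segmentation. A minimal PyTorch sketch of that two-head arrangement is shown below; the layer sizes, channel counts, and head design are illustrative assumptions, not the paper's actual residual inception DenseNet.

```python
import torch
import torch.nn as nn

class TwoHeadSynthSeg(nn.Module):
    """Toy stand-in for the two-output idea behind T1c-ET: a shared
    encoder feeds one head that regresses the virtual T1c image and a
    second head that predicts the enhancing-tumor mask. The real model
    is a much deeper residual inception DenseNet."""

    def __init__(self, in_ch: int = 3):
        # in_ch = number of noncontrast input sequences (e.g., T1, T2, FLAIR)
        super().__init__()
        self.shared = nn.Sequential(
            nn.Conv2d(in_ch, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.synth_head = nn.Conv2d(16, 1, kernel_size=1)  # vT1c intensities
        self.seg_head = nn.Conv2d(16, 1, kernel_size=1)    # enhancement logits

    def forward(self, x):
        feats = self.shared(x)
        vt1c = self.synth_head(feats)
        seg = torch.sigmoid(self.seg_head(feats))  # per-pixel probability
        return vt1c, seg
```

Training such a model would combine an image-reconstruction loss on the synthesis head with a segmentation loss on the mask head, so the segmentation task can steer the synthesis toward correct enhancement regions.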
RESULTS:
The synthesized vT1c images achieved structural similarity index, peak signal-to-noise ratio, and normalized mean square error scores of 0.91, 64.35, and 0.03, respectively. There was moderate interobserver agreement among the 3 raters regarding the algorithm's performance in predicting contrast enhancement, with a Fleiss kappa value of 0.61. Our model accurately predicted contrast enhancement in 88.8% of cases (scores of 2 or 3 on the 3-point scale).
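The quantitative measures reported above are standard and easy to reproduce. A minimal NumPy sketch of PSNR, NMSE, and Fleiss' kappa is given below (the exact normalization conventions used in the paper are an assumption); SSIM involves local windowed statistics and is better taken from a library such as `skimage.metrics.structural_similarity` than reimplemented.

```python
import numpy as np

def psnr(ref, pred, data_range=1.0):
    """Peak signal-to-noise ratio in dB between a reference and a prediction."""
    mse = np.mean((ref - pred) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def nmse(ref, pred):
    """Normalized mean square error: squared error over reference energy."""
    return np.sum((ref - pred) ** 2) / np.sum(ref ** 2)

def fleiss_kappa(counts):
    """Fleiss' kappa for agreement among a fixed number of raters.
    `counts` is an (items x categories) array of rating tallies,
    e.g., rows = rated cases, columns = Likert scores 1/2/3."""
    counts = np.asarray(counts, dtype=float)
    n_raters = counts.sum(axis=1)[0]          # raters per item (constant)
    p_cat = counts.sum(axis=0) / counts.sum() # overall category proportions
    # Per-item observed agreement, then chance-corrected kappa
    p_item = (np.sum(counts ** 2, axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_obs, p_chance = p_item.mean(), np.sum(p_cat ** 2)
    return (p_obs - p_chance) / (1.0 - p_chance)
```

With three raters and three Likert categories, perfect agreement on every case yields kappa = 1, while a value around 0.61, as reported here, is conventionally read as moderate-to-substantial agreement.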
CONCLUSIONS:
We developed a novel deep learning architecture to synthesize virtual postcontrast enhancement by using only conventional noncontrast brain MR images. Our results demonstrate the potential of deep learning methods to reduce the need for gadolinium contrast in the evaluation of primary brain tumors.
Journal Description:
The mission of AJNR is to further knowledge in all aspects of neuroimaging, head and neck imaging, and spine imaging for neuroradiologists, radiologists, trainees, scientists, and associated professionals through print and/or electronic publication of quality peer-reviewed articles that lead to the highest standards in patient care, research, and education and to promote discussion of these and other issues through its electronic activities.