Mustafa Ahmed Mahmutoglu, Chandrakanth Jayachandran Preetha, Hagen Meredig, Joerg-Christian Tonn, Michael Weller, Wolfgang Wick, Martin Bendszus, Gianluca Brugnara, Philipp Vollmuth
Download PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10831512/pdf/
Deep Learning-based Identification of Brain MRI Sequences Using a Model Trained on Large Multicentric Study Cohorts

Radiology-Artificial Intelligence, Vol. 6, No. 1, e230095. Published 2024-01-01. DOI: 10.1148/ryai.230095

Abstract

Purpose: To develop a fully automated device- and sequence-independent convolutional neural network (CNN) for reliable and high-throughput labeling of heterogeneous, unstructured MRI data.

Materials and Methods: Retrospective, multicentric brain MRI data (2179 patients with glioblastoma, 8544 examinations, 63 327 sequences) from 249 hospitals and 29 scanner types were used to develop a network based on the ResNet-18 architecture to differentiate nine MRI sequence types: T1-weighted, postcontrast T1-weighted, T2-weighted, fluid-attenuated inversion recovery, susceptibility-weighted, apparent diffusion coefficient, diffusion-weighted (low and high b value), gradient-recalled echo T2*-weighted, and dynamic susceptibility contrast-related images. The two-dimensional midsection images from each sequence were allocated to training/validation (approximately 80%) and testing (approximately 20%) using a stratified split to ensure balanced groups across institutions, patients, and MRI sequence types. Prediction accuracy was quantified for each sequence type, and subgroup comparisons of model performance were performed using χ² tests.

Results: On the test set, the overall accuracy of the CNN (ResNet-18) ensemble model across all sequence types was 97.9% (95% CI: 97.6, 98.1), ranging from 84.2% (95% CI: 81.8, 86.6) for susceptibility-weighted images to 99.8% (95% CI: 99.7, 99.9) for T2-weighted images. The ResNet-18 model achieved significantly better accuracy than ResNet-50 despite its simpler architecture (97.9% vs 97.1%; P ≤ .001). The accuracy of the ResNet-18 model was not affected by the presence versus absence of tumor on the two-dimensional midsection images for any sequence type (P > .05).

Conclusion: The developed CNN (www.github.com/neuroAI-HD/HD-SEQ-ID) reliably differentiates nine types of MRI sequences within multicenter, large-scale population neuroimaging data and may enhance the speed, accuracy, and efficiency of clinical and research neuroradiologic workflows.

Keywords: MR-Imaging, Neural Networks, CNS, Brain/Brain Stem, Computer Applications-General (Informatics), Convolutional Neural Network (CNN), Deep Learning Algorithms, Machine Learning Algorithms

Supplemental material is available for this article. © RSNA, 2023.
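The abstract reports per-sequence accuracies with 95% CIs and compares the two models with a χ² test. As a minimal illustration of that style of analysis (not the authors' actual code, and with illustrative counts rather than the study's real test-set tallies), the standard-library sketch below computes a Wilson score interval for a classification accuracy and a 2×2 Pearson χ² statistic (1 degree of freedom) comparing two accuracies:

```python
import math

def wilson_ci(correct, total, z=1.96):
    """95% Wilson score interval for a binomial proportion (e.g. a per-sequence accuracy)."""
    p = correct / total
    denom = 1 + z ** 2 / total
    centre = (p + z ** 2 / (2 * total)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / total + z ** 2 / (4 * total ** 2))
    return centre - half, centre + half

def chi2_2x2(correct_a, total_a, correct_b, total_b):
    """Pearson chi-square test (1 df) comparing two accuracies; returns (statistic, p).

    Sketch only: assumes all expected cell counts are nonzero.
    """
    table = [[correct_a, total_a - correct_a],
             [correct_b, total_b - correct_b]]
    row = [sum(r) for r in table]
    col = [table[0][j] + table[1][j] for j in range(2)]
    n = sum(row)
    stat = sum((table[i][j] - row[i] * col[j] / n) ** 2 / (row[i] * col[j] / n)
               for i in range(2) for j in range(2))
    # Survival function of chi-square with 1 df: P(X > x) = erfc(sqrt(x / 2))
    p = math.erfc(math.sqrt(stat / 2))
    return stat, p

# Illustrative counts only: 979/1000 vs 900/1000 correct predictions.
low, high = wilson_ci(979, 1000)
stat, p = chi2_2x2(979, 1000, 900, 1000)
```

In practice a library routine such as `scipy.stats.chi2_contingency` would be used instead of the hand-rolled statistic; the point here is only to make the reported "accuracy (95% CI)" and "P ≤ .001" quantities concrete.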
Citations: 0