{"title":"A Machine Learning Model to Harmonize Volumetric Brain MRI Data for Quantitative Neuroradiologic Assessment of Alzheimer Disease.","authors":"Damiano Archetti, Vikram Venkatraghavan, Béla Weiss, Pierrick Bourgeat, Tibor Auer, Zoltán Vidnyánszky, Stanley Durrleman, Wiesje M van der Flier, Frederik Barkhof, Daniel C Alexander, Andre Altmann, Alberto Redolfi, Betty M Tijms, Neil P Oxtoby","doi":"10.1148/ryai.240030","DOIUrl":"10.1148/ryai.240030","url":null,"abstract":"<p><p>Purpose To extend a previously developed machine learning algorithm for harmonizing brain volumetric data of individuals undergoing neuroradiologic assessment of Alzheimer disease not encountered during model training. Materials and Methods Neuroharmony is a recently developed method that uses image quality metrics as predictors to remove scanner-related effects in brain-volumetric data using random forest regression. To account for the interactions between Alzheimer disease pathology and image quality metrics during harmonization, the authors developed a multiclass extension of Neuroharmony for individuals with and without cognitive impairment. Cross-validation experiments were performed to benchmark performance against other available strategies using data from 20 864 participants with and without cognitive impairment, spanning 11 prospective and retrospective cohorts and 43 scanners. Evaluation metrics assessed the ability to remove scanner-related variations in brain volumes (marker concordance between scanner pairs) while retaining the ability to delineate different diagnostic groups (preserving disease-related signal). Results For each strategy, marker concordances between scanners were significantly better (<i>P</i> < .001) compared with preharmonized data. 
The proposed multiclass model achieved significantly higher concordance (mean, 0.75 ± 0.09 [SD]) than the Neuroharmony model trained on individuals without cognitive impairment (mean, 0.70 ± 0.11) and preserved disease-related signal (∆AUC [area under the receiver operating characteristic curve] = -0.006 ± 0.027) better than the Neuroharmony model trained on individuals with and without cognitive impairment that did not use the proposed extension (∆AUC = -0.091 ± 0.036). The marker concordance was better in scanners seen during training (concordance > 0.97) than unseen (concordance < 0.79), independent of cognitive status. Conclusion In a large-scale multicenter dataset, the proposed multiclass Neuroharmony model outperformed other available strategies for harmonizing brain volumetric data from unseen scanners in a clinical setting. <b>Keywords:</b> Image Postprocessing, MR Imaging, Dementia, Random Forest <i>Supplemental material is available for this article.</i> Published under a CC BY 4.0 license See also commentary by Haller in this issue.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e240030"},"PeriodicalIF":8.1,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142847882","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
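The Neuroharmony approach described above regresses scanner-related effects in brain volumes on image quality metrics with a random forest, then subtracts the predicted effect. A minimal sketch of that idea follows, using scikit-learn on simulated data; the variable names, feature count, and the use of simulated ground-truth volumes as the regression target are illustrative assumptions, not the published model's actual training setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 200

# Hypothetical image quality metrics (the published model uses MRIQC-style IQMs)
iqm = rng.normal(size=(n, 4))

# Simulated scanner-related effect driven by the quality metrics
scanner_effect = iqm @ np.array([0.5, -0.3, 0.2, 0.1])

# Simulated "true" regional volumes (e.g., in mL) and the observed, biased ones
true_volume = rng.normal(loc=3.5, scale=0.3, size=n)
observed = true_volume + scanner_effect

# Regress the scanner-related deviation on the quality metrics; in the real
# method this target is estimated from training cohorts, not known exactly
deviation = observed - true_volume
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(iqm, deviation)

# Harmonize by subtracting the predicted scanner effect
harmonized = observed - model.predict(iqm)

err_before = np.mean(np.abs(observed - true_volume))
err_after = np.mean(np.abs(harmonized - true_volume))
print(err_before, err_after)
```

On this simulation the harmonized volumes sit closer to the scanner-free values than the observed ones; the multiclass extension in the article additionally conditions this correction on cognitive status so that disease-related volume differences are not regressed away.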
{"title":"SCIseg: Automatic Segmentation of Intramedullary Lesions in Spinal Cord Injury on T2-weighted MRI Scans.","authors":"Enamundram Naga Karthik, Jan Valošek, Andrew C Smith, Dario Pfyffer, Simon Schading-Sassenhausen, Lynn Farner, Kenneth A Weber, Patrick Freund, Julien Cohen-Adad","doi":"10.1148/ryai.240005","DOIUrl":"10.1148/ryai.240005","url":null,"abstract":"<p><p>Purpose To develop a deep learning tool for the automatic segmentation of the spinal cord and intramedullary lesions in spinal cord injury (SCI) on T2-weighted MRI scans. Materials and Methods This retrospective study included MRI data acquired between July 2002 and February 2023. The data consisted of T2-weighted MRI scans acquired using different scanner manufacturers with various image resolutions (isotropic and anisotropic) and orientations (axial and sagittal). Patients had different lesion etiologies (traumatic, ischemic, and hemorrhagic) and lesion locations across the cervical, thoracic, and lumbar spine. A deep learning model, SCIseg (which is open source and accessible through the Spinal Cord Toolbox, version 6.2 and above), was trained in a three-phase process involving active learning for the automatic segmentation of intramedullary SCI lesions and the spinal cord. The segmentations from the proposed model were visually and quantitatively compared with those from three other open-source methods (PropSeg, DeepSeg, and contrast-agnostic, all part of the Spinal Cord Toolbox). The Wilcoxon signed rank test was used to compare quantitative MRI biomarkers of SCI (lesion volume, lesion length, and maximal axial damage ratio) derived from the manual reference standard lesion masks and biomarkers obtained automatically with SCIseg segmentations. Results The study included 191 patients with SCI (mean age, 48.1 years ± 17.9 [SD]; 142 [74%] male patients). SCIseg achieved a mean Dice score of 0.92 ± 0.07 and 0.61 ± 0.27 for spinal cord and SCI lesion segmentation, respectively. 
There was no evidence of a difference in lesion length (<i>P</i> = .42) or maximal axial damage ratio (<i>P</i> = .16) between values computed from manually annotated lesions and those obtained using SCIseg segmentations. Conclusion SCIseg accurately segmented intramedullary lesions on a diverse dataset of T2-weighted MRI scans and automatically extracted clinically relevant lesion characteristics. <b>Keywords:</b> Spinal Cord, Trauma, Segmentation, MR Imaging, Supervised Learning, Convolutional Neural Network (CNN) Published under a CC BY 4.0 license.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e240005"},"PeriodicalIF":8.1,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11791505/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142584300","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
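The SCIseg abstract reports mean Dice scores of 0.92 (spinal cord) and 0.61 (lesions). The Dice similarity coefficient it refers to is a standard overlap metric between a predicted and a reference binary mask; a minimal sketch of its computation follows, with toy masks standing in for real segmentations.

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treated as perfect agreement by convention
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy 2D example (real SCI masks are 3D volumes)
pred = np.zeros((4, 4), dtype=int)
pred[1:3, 1:3] = 1   # 4 predicted voxels
ref = np.zeros((4, 4), dtype=int)
ref[1:3, 1:4] = 1    # 6 reference voxels; intersection is 4 voxels

print(dice(pred, ref))  # 2*4 / (4+6) = 0.8
```

Dice ranges from 0 (no overlap) to 1 (identical masks); the gap between the cord score (0.92) and the lesion score (0.61) reflects how much harder small, irregular lesions are to delineate than the cord itself.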
{"title":"FastMRI Breast: A Publicly Available Radial k-Space Dataset of Breast Dynamic Contrast-enhanced MRI.","authors":"Eddy Solomon, Patricia M Johnson, Zhengguo Tan, Radhika Tibrewala, Yvonne W Lui, Florian Knoll, Linda Moy, Sungheon Gene Kim, Laura Heacock","doi":"10.1148/ryai.240345","DOIUrl":"10.1148/ryai.240345","url":null,"abstract":"","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e240345"},"PeriodicalIF":8.1,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11791504/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142956032","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Achieving More with Less: Combining Strong and Weak Labels for Intracranial Hemorrhage Detection.","authors":"Tugba Akinci D'Antonoli, Jeffrey D Rudie","doi":"10.1148/ryai.240670","DOIUrl":"10.1148/ryai.240670","url":null,"abstract":"","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":"6 6","pages":"e240670"},"PeriodicalIF":8.1,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11605141/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142584303","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Gianluca Brugnara, Chandrakanth Jayachandran Preetha, Katerina Deike, Robert Haase, Thomas Pinetz, Martha Foltyn-Dumitru, Mustafa A Mahmutoglu, Brigitte Wildemann, Ricarda Diem, Wolfgang Wick, Alexander Radbruch, Martin Bendszus, Hagen Meredig, Aditya Rastogi, Philipp Vollmuth
{"title":"Boosting Deep Learning for Interpretable Brain MRI Lesion Detection through the Integration of Radiology Report Information.","authors":"Lisong Dai, Jiayu Lei, Fenglong Ma, Zheng Sun, Haiyan Du, Houwang Zhang, Jingxuan Jiang, Jianyong Wei, Dan Wang, Guang Tan, Xinyu Song, Jinyu Zhu, Qianqian Zhao, Songtao Ai, Ai Shang, Zhaohui Li, Ya Zhang, Yuehua Li","doi":"10.1148/ryai.230520","DOIUrl":"10.1148/ryai.230520","url":null,"abstract":"<p><p>Purpose To guide the attention of a deep learning (DL) model toward MRI characteristics of brain lesions by incorporating radiology report-derived textual features to achieve interpretable lesion detection. Materials and Methods In this retrospective study, 35 282 brain MRI scans (January 2018 to June 2023) and corresponding radiology reports from center 1 were used for training, validation, and internal testing. A total of 2655 brain MRI scans (January 2022 to December 2022) from centers 2-5 were reserved for external testing. Textual features were extracted from radiology reports to guide a DL model (ReportGuidedNet) focusing on lesion characteristics. Another DL model (PlainNet) without textual features was developed for comparative analysis. Both models identified 15 conditions, including 14 diseases and normal brains. Performance of each model was assessed by calculating macro-averaged area under the receiver operating characteristic curve (ma-AUC) and micro-averaged AUC (mi-AUC). Attention maps, which visualized model attention, were assessed with a five-point Likert scale. Results ReportGuidedNet outperformed PlainNet for all diagnoses on both internal (ma-AUC, 0.93 [95% CI: 0.91, 0.95] vs 0.85 [95% CI: 0.81, 0.88]; mi-AUC, 0.93 [95% CI: 0.90, 0.95] vs 0.89 [95% CI: 0.83, 0.92]) and external (ma-AUC, 0.91 [95% CI: 0.88, 0.93] vs 0.75 [95% CI: 0.72, 0.79]; mi-AUC, 0.90 [95% CI: 0.87, 0.92] vs 0.76 [95% CI: 0.72, 0.80]) testing sets. 
The performance difference between internal and external testing sets was smaller for ReportGuidedNet than for PlainNet (Δma-AUC, 0.03 vs 0.10; Δmi-AUC, 0.02 vs 0.13). The Likert scale score of ReportGuidedNet was higher than that of PlainNet (mean ± SD: 2.50 ± 1.09 vs 1.32 ± 1.20; <i>P</i> < .001). Conclusion The integration of radiology report textual features improved the ability of the DL model to detect brain lesions, thereby enhancing interpretability and generalizability. <b>Keywords:</b> Deep Learning, Computer-aided Diagnosis, Knowledge-driven Model, Radiology Report, Brain MRI <i>Supplemental material is available for this article.</i> Published under a CC BY 4.0 license.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e230520"},"PeriodicalIF":8.1,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11605145/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142393849","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
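The ReportGuidedNet abstract distinguishes macro-averaged AUC (ma-AUC), which averages per-condition AUCs so each of the 15 conditions counts equally, from micro-averaged AUC (mi-AUC), which pools every label decision before computing one AUC. A minimal sketch of the distinction follows, using scikit-learn's `roc_auc_score` on hypothetical one-vs-rest scores for three conditions; the numbers are invented for illustration only.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical one-vs-rest labels and scores for 3 of the 15 conditions
y_true = np.array([[1, 0, 0],
                   [0, 1, 0],
                   [0, 0, 1],
                   [1, 0, 0]])
y_score = np.array([[0.90, 0.10, 0.20],
                    [0.30, 0.70, 0.10],
                    [0.20, 0.20, 0.80],
                    [0.35, 0.50, 0.30]])

# Macro: compute AUC per condition, then average (each condition weighted equally)
ma_auc = roc_auc_score(y_true, y_score, average="macro")

# Micro: flatten all label decisions into one ranking, then compute a single AUC
mi_auc = roc_auc_score(y_true, y_score, average="micro")

print(ma_auc, mi_auc)
```

Here each condition ranks its own positives perfectly (ma-AUC = 1.0), yet pooling mixes score scales across conditions so mi-AUC falls below 1.0; this is why the abstract reports both, since macro averaging protects rare conditions from being swamped by common ones.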