Sebastiaan Hermans, Zixuan Hu, Robyn L Ball, Hui Ming Lin, Luciano M Prevedello, Ferco H Berger, Ibrahim Yusuf, Jeffrey D Rudie, Maryam Vazirabad, Adam E Flanders, George Shih, John Mongan, Savvas Nicolaou, Brett S Marinelli, Melissa A Davis, Kirti Magudia, Ervin Sejdić, Errol Colak
A Machine Learning Model to Harmonize Volumetric Brain MRI Data for Quantitative Neuroradiologic Assessment of Alzheimer Disease

Damiano Archetti, Vikram Venkatraghavan, Béla Weiss, Pierrick Bourgeat, Tibor Auer, Zoltán Vidnyánszky, Stanley Durrleman, Wiesje M van der Flier, Frederik Barkhof, Daniel C Alexander, Andre Altmann, Alberto Redolfi, Betty M Tijms, Neil P Oxtoby

Radiology: Artificial Intelligence, e240030. Published January 1, 2025. DOI: 10.1148/ryai.240030.

Purpose: To extend a previously developed machine learning algorithm for harmonizing brain volumetric data to individuals undergoing neuroradiologic assessment of Alzheimer disease who were not encountered during model training.

Materials and Methods: Neuroharmony is a recently developed method that uses image quality metrics as predictors to remove scanner-related effects from brain volumetric data via random forest regression. To account for interactions between Alzheimer disease pathology and image quality metrics during harmonization, the authors developed a multiclass extension of Neuroharmony for individuals with and without cognitive impairment. Cross-validation experiments benchmarked its performance against other available strategies using data from 20,864 participants with and without cognitive impairment, spanning 11 prospective and retrospective cohorts and 43 scanners. Evaluation metrics assessed the ability to remove scanner-related variation in brain volumes (marker concordance between scanner pairs) while retaining the ability to delineate diagnostic groups (preserving disease-related signal).

Results: For each strategy, marker concordance between scanners was significantly better (P < .001) than for preharmonized data. The proposed multiclass model achieved significantly higher concordance (mean, 0.75 ± 0.09 [SD]) than the Neuroharmony model trained only on individuals without cognitive impairment (mean, 0.70 ± 0.11), and it preserved disease-related signal (ΔAUC [area under the receiver operating characteristic curve] = -0.006 ± 0.027) better than a Neuroharmony model trained on individuals with and without cognitive impairment that did not use the proposed extension (ΔAUC = -0.091 ± 0.036). Marker concordance was better for scanners seen during training (concordance > 0.97) than for unseen scanners (concordance < 0.79), independent of cognitive status.

Conclusion: In a large-scale multicenter dataset, the proposed multiclass Neuroharmony model outperformed other available strategies for harmonizing brain volumetric data from unseen scanners in a clinical setting.

Keywords: Image Postprocessing, MR Imaging, Dementia, Random Forest. Supplemental material is available for this article. Published under a CC BY 4.0 license. See also the commentary by Haller in this issue.
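The core Neuroharmony idea described above (regress scanner-related effects on image quality metrics with a random forest, then subtract the prediction) can be sketched on simulated data. Everything below is a toy stand-in: the feature values, coefficients, and volumes are simulated, not the published model or its MRIQC-derived predictors, and the scanner effect is known here only because we generated it — the actual method must instead learn corrections estimated from training cohorts.

```python
# Toy sketch of Neuroharmony-style harmonization: a random forest learns the
# scanner-related component of a brain volume from image quality metrics
# (IQMs); subtracting that prediction harmonizes held-out measurements.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000
iqm = rng.normal(size=(n, 4))                         # simulated IQMs
scanner_effect = iqm @ np.array([0.30, -0.20, 0.10, 0.05])
true_volume = rng.normal(1.0, 0.10, size=n)           # "biological" signal
observed = true_volume + scanner_effect               # scanner bias added

(X_tr, X_te, y_tr, y_te,
 obs_tr, obs_te, tv_tr, tv_te) = train_test_split(
    iqm, scanner_effect, observed, true_volume, random_state=0)

# Regress the (simulated) scanner effect on the IQMs ...
rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(X_tr, y_tr)

# ... then subtract the predicted effect from unseen measurements.
harmonized = obs_te - rf.predict(X_te)
err_before = np.mean(np.abs(obs_te - tv_te))
err_after = np.mean(np.abs(harmonized - tv_te))
print(f"mean abs error vs true volume: {err_before:.3f} -> {err_after:.3f}")
```

The multiclass extension evaluated in the article additionally conditions this correction on diagnostic group, so that disease-related differences in the IQMs are not mistaken for scanner effects.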
SCIseg: Automatic Segmentation of Intramedullary Lesions in Spinal Cord Injury on T2-weighted MRI Scans

Enamundram Naga Karthik, Jan Valošek, Andrew C Smith, Dario Pfyffer, Simon Schading-Sassenhausen, Lynn Farner, Kenneth A Weber, Patrick Freund, Julien Cohen-Adad

Radiology: Artificial Intelligence, e240005. Published January 1, 2025. DOI: 10.1148/ryai.240005. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11791505/pdf/

Purpose: To develop a deep learning tool for the automatic segmentation of the spinal cord and intramedullary lesions in spinal cord injury (SCI) on T2-weighted MRI scans.

Materials and Methods: This retrospective study included MRI data acquired between July 2002 and February 2023. The data consisted of T2-weighted scans acquired on scanners from different manufacturers, with various image resolutions (isotropic and anisotropic) and orientations (axial and sagittal). Patients had different lesion etiologies (traumatic, ischemic, and hemorrhagic) and lesion locations across the cervical, thoracic, and lumbar spine. A deep learning model, SCIseg (open source and available through the Spinal Cord Toolbox, version 6.2 and above), was trained in a three-phase process involving active learning to automatically segment intramedullary SCI lesions and the spinal cord. Segmentations from the proposed model were compared visually and quantitatively with those from three other open-source methods (PropSeg, DeepSeg, and the contrast-agnostic model, all part of the Spinal Cord Toolbox). The Wilcoxon signed rank test was used to compare quantitative MRI biomarkers of SCI (lesion volume, lesion length, and maximal axial damage ratio) derived from the manual reference standard lesion masks with those obtained automatically from SCIseg segmentations.

Results: The study included 191 patients with SCI (mean age, 48.1 years ± 17.9 [SD]; 142 [74%] male). SCIseg achieved mean Dice scores of 0.92 ± 0.07 for spinal cord segmentation and 0.61 ± 0.27 for SCI lesion segmentation. There was no evidence of a difference in lesion length (P = .42) or maximal axial damage ratio (P = .16) between values computed from manually annotated lesions and those obtained from SCIseg segmentations.

Conclusion: SCIseg accurately segmented intramedullary lesions on a diverse dataset of T2-weighted MRI scans and automatically extracted clinically relevant lesion characteristics.

Keywords: Spinal Cord, Trauma, Segmentation, MR Imaging, Supervised Learning, Convolutional Neural Network (CNN). Published under a CC BY 4.0 license.
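The Dice scores reported above (0.92 for cord, 0.61 for lesions) measure voxel overlap between a predicted and a reference binary mask. A minimal NumPy implementation, using toy 2D masks rather than SCIseg outputs:

```python
# Dice similarity coefficient for binary segmentation masks:
# Dice = 2|A ∩ B| / (|A| + |B|), ranging from 0 (no overlap) to 1 (identical).
import numpy as np

def dice(pred: np.ndarray, ref: np.ndarray) -> float:
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, ref).sum() / denom

# Toy "lesion" masks: the prediction is the reference shifted one row down.
ref = np.zeros((8, 8), dtype=bool);  ref[2:6, 2:6] = True   # 16 voxels
pred = np.zeros((8, 8), dtype=bool); pred[3:7, 2:6] = True  # 16 voxels, 12 shared
print(dice(pred, ref))  # 2*12 / (16+16) = 0.75
```

In practice the same formula is applied to 3D volumes, where small structures such as intramedullary lesions yield lower scores than large ones (here, the cord) because each misclassified voxel weighs more against a small denominator.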
FastMRI Breast: A Publicly Available Radial k-Space Dataset of Breast Dynamic Contrast-enhanced MRI

Eddy Solomon, Patricia M Johnson, Zhengguo Tan, Radhika Tibrewala, Yvonne W Lui, Florian Knoll, Linda Moy, Sungheon Gene Kim, Laura Heacock

Radiology: Artificial Intelligence, e240345. Published January 1, 2025. DOI: 10.1148/ryai.240345. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11791504/pdf/

(No abstract available.)