{"title":"SCIseg: Automatic Segmentation of Intramedullary Lesions in Spinal Cord Injury on T2-weighted MRI Scans.","authors":"Enamundram Naga Karthik, Jan Valošek, Andrew C Smith, Dario Pfyffer, Simon Schading-Sassenhausen, Lynn Farner, Kenneth A Weber, Patrick Freund, Julien Cohen-Adad","doi":"10.1148/ryai.240005","DOIUrl":"10.1148/ryai.240005","url":null,"abstract":"<p><p><i>\"Just Accepted\" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content.</i> Purpose To develop a deep learning tool for the automatic segmentation of the spinal cord and intramedullary lesions in spinal cord injury (SCI) on T2-weighted MRI scans. Materials and Methods This retrospective study included MRI data acquired between July 2002 and February 2023 from 191 patients with SCI (mean age, 48.1 years ± 17.9 [SD]; 142 males). The data consisted of T2-weighted MRI acquired using different scanner manufacturers with various image resolutions (isotropic and anisotropic) and orientations (axial and sagittal). Patients had different lesion etiologies (traumatic, ischemic, and hemorrhagic) and lesion locations across the cervical, thoracic and lumbar spine. A deep learning model, SCIseg, was trained in a three-phase process involving active learning for the automatic segmentation of intramedullary SCI lesions and the spinal cord. The segmentations from the proposed model were visually and quantitatively compared with those from three other open-source methods (PropSeg, DeepSeg and contrast-agnostic, all part of the Spinal Cord Toolbox). Wilcoxon signed-rank test was used to compare quantitative MRI biomarkers of SCI (lesion volume, lesion length, and maximal axial damage ratio) derived from the manual reference standard lesion masks and biomarkers obtained automatically with SCIseg segmentations. Results SCIseg achieved a Dice score of 0.92 ± 0.07 (mean ± SD) and 0.61 ± 0.27 for spinal cord and SCI lesion segmentation, respectively. There was no evidence of a difference between lesion length (<i>P</i> = .42) and maximal axial damage ratio (<i>P</i> = .16) computed from manually annotated lesions and the lesion segmentations obtained using SCIseg. Conclusion SCIseg accurately segmented intramedullary lesions on a diverse dataset of T2-weighted MRI scans and extracted relevant lesion biomarkers (namely, lesion volume, lesion length, and maximal axial damage ratio). SCIseg is open-source and accessible through the Spinal Cord Toolbox (v6.2 and above). Published under a CC BY 4.0 license.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":8.1,"publicationDate":"2024-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142584300","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Achieving More with Less: Combining Strong and Weak Labels for Intracranial Hemorrhage Detection.","authors":"Tugba Akinci D'Antonoli, Jeffrey D Rudie","doi":"10.1148/ryai.240670","DOIUrl":"https://doi.org/10.1148/ryai.240670","url":null,"abstract":"","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":8.1,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142584303","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Boosting Deep Learning for Interpretable Brain MRI Lesion Detection through the Integration of Radiology Report Information.","authors":"Lisong Dai, Jiayu Lei, Fenglong Ma, Zheng Sun, Haiyan Du, Houwang Zhang, Jingxuan Jiang, Jianyong Wei, Dan Wang, Guang Tan, Xinyu Song, Jinyu Zhu, Qianqian Zhao, Songtao Ai, Ai Shang, Zhaohui Li, Ya Zhang, Yuehua Li","doi":"10.1148/ryai.230520","DOIUrl":"10.1148/ryai.230520","url":null,"abstract":"<p><p>Purpose To guide the attention of a deep learning (DL) model toward MRI characteristics of brain lesions by incorporating radiology report-derived textual features to achieve interpretable lesion detection. Materials and Methods In this retrospective study, 35 282 brain MRI scans (January 2018 to June 2023) and corresponding radiology reports from center 1 were used for training, validation, and internal testing. A total of 2655 brain MRI scans (January 2022 to December 2022) from centers 2-5 were reserved for external testing. Textual features were extracted from radiology reports to guide a DL model (ReportGuidedNet) focusing on lesion characteristics. Another DL model (PlainNet) without textual features was developed for comparative analysis. Both models identified 15 conditions, including 14 diseases and normal brains. Performance of each model was assessed by calculating macro-averaged area under the receiver operating characteristic curve (ma-AUC) and micro-averaged AUC (mi-AUC). Attention maps, which visualized model attention, were assessed with a five-point Likert scale. Results ReportGuidedNet outperformed PlainNet for all diagnoses on both internal (ma-AUC, 0.93 [95% CI: 0.91, 0.95] vs 0.85 [95% CI: 0.81, 0.88]; mi-AUC, 0.93 [95% CI: 0.90, 0.95] vs 0.89 [95% CI: 0.83, 0.92]) and external (ma-AUC, 0.91 [95% CI: 0.88, 0.93] vs 0.75 [95% CI: 0.72, 0.79]; mi-AUC, 0.90 [95% CI: 0.87, 0.92] vs 0.76 [95% CI: 0.72, 0.80]) testing sets. The performance difference between internal and external testing sets was smaller for ReportGuidedNet than for PlainNet (Δma-AUC, 0.03 vs 0.10; Δmi-AUC, 0.02 vs 0.13). The Likert scale score of ReportGuidedNet was higher than that of PlainNet (mean ± SD: 2.50 ± 1.09 vs 1.32 ± 1.20; <i>P</i> < .001). Conclusion The integration of radiology report textual features improved the ability of the DL model to detect brain lesions, thereby enhancing interpretability and generalizability. <b>Keywords:</b> Deep Learning, Computer-aided Diagnosis, Knowledge-driven Model, Radiology Report, Brain MRI <i>Supplemental material is available for this article.</i> Published under a CC BY 4.0 license.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":8.1,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142393849","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"WAW-TACE: A Hepatocellular Carcinoma Multiphase CT Dataset with Segmentations, Radiomics Features, and Clinical Data.","authors":"Krzysztof Bartnik, Tomasz Bartczak, Mateusz Krzyziński, Krzysztof Korzeniowski, Krzysztof Lamparski, Piotr Węgrzyn, Eric Lam, Mateusz Bartkowiak, Tadeusz Wróblewski, Katarzyna Mech, Magdalena Januszewicz, Przemysław Biecek","doi":"10.1148/ryai.240296","DOIUrl":"https://doi.org/10.1148/ryai.240296","url":null,"abstract":"<p><p><i>\"Just Accepted\" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content.</i> The WAW-TACE dataset contains baseline multiphase abdominal CT images from 233 treatment-naive patients with hepatocellular carcinoma treated with transarterial chemoembolization and includes with 377 hand-crafted liver tumor masks, automated segmentations of multiple internal organs, extracted radiomics features, and corresponding extensive clinical data. The dataset can be accessed at: https://zenodo.org/records/12741586 (DOI:10.5281/zenodo.11063784).</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":8.1,"publicationDate":"2024-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142509377","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}