Radiology-Artificial Intelligence: Latest Articles

Deep Learning Applied to Diffusion-weighted Imaging for Differentiating Malignant from Benign Breast Tumors without Lesion Segmentation.
IF 8.1
Radiology-Artificial Intelligence Pub Date : 2024-11-20 DOI: 10.1148/ryai.240206
Mami Iima, Ryosuke Mizuno, Masako Kataoka, Kazuki Tsuji, Toshiki Yamazaki, Akihiko Minami, Maya Honda, Keiho Imanishi, Masahiro Takada, Yuji Nakamoto
"Just Accepted" papers have undergone full peer review and have been accepted for publication in Radiology: Artificial Intelligence. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content.

Purpose: To evaluate and compare the performance of different artificial intelligence (AI) models in differentiating between benign and malignant breast tumors on diffusion-weighted imaging (DWI), including comparison with radiologist assessments.

Materials and Methods: In this retrospective study, patients with breast lesions underwent 3-T breast MRI from May 2019 to March 2022. In addition to T1-weighted, T2-weighted, and contrast-enhanced imaging, DWI was acquired with five b-values (0, 200, 800, 1000, and 1500 s/mm²). DWI data, split into training, tuning, and test sets, were used for the development and assessment of AI models, including a small two-dimensional (2D) convolutional neural network (CNN), ResNet18, EfficientNet-B0, and a 3D CNN. Performance of the DWI-based models in differentiating between benign and malignant breast tumors was compared with that of radiologists assessing standard breast MRI, with diagnostic performance assessed using receiver operating characteristic analysis. The study also examined the effects of data augmentation (A: random elastic deformation; B: random affine transformation/random noise; C: mixup) on model performance.

Results: A total of 334 breast lesions in 293 patients (mean age [SD], 56.5 [15.1] years; all female) were analyzed. The 2D CNN models outperformed the 3D CNN on the test set (area under the receiver operating characteristic curve [AUC] across data augmentation methods: 0.83-0.88 versus 0.75-0.76). There was no evidence of a difference in performance between the small 2D CNN with augmentations A and B (AUC, 0.88) and the radiologists (AUC, 0.86) on the test set (P = .64). When comparing the small 2D CNN with the radiologists, there was no evidence of a difference in specificity (81.4% versus 72.1%; P = .64) or sensitivity (85.9% versus 98.8%; P = .64).

Conclusion: AI models, particularly a small 2D CNN, showed good performance in differentiating between malignant and benign breast tumors using DWI, without the need for manual lesion segmentation. ©RSNA, 2024.
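The mixup augmentation (method C in the abstract above) blends pairs of training images and their labels with a randomly drawn weight. A minimal NumPy sketch of the general technique (illustrative only, not the authors' implementation; array shapes and the alpha parameter are assumptions):

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Blend two training examples and their labels (mixup augmentation).

    x1, x2: image arrays of identical shape; y1, y2: one-hot label vectors.
    alpha controls the Beta distribution the mixing weight is drawn from.
    """
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.beta(alpha, alpha)          # mixing coefficient in (0, 1)
    x = lam * x1 + (1.0 - lam) * x2       # convex combination of images
    y = lam * y1 + (1.0 - lam) * y2       # same combination of labels
    return x, y, lam
```

During training, mixed pairs replace (or supplement) the raw examples, which smooths decision boundaries and acts as a regularizer on small datasets like this one.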
RSNA 2023 Abdominal Trauma AI Challenge Review and Outcomes Analysis.
IF 8.1
Radiology-Artificial Intelligence Pub Date : 2024-11-06 DOI: 10.1148/ryai.240334
Sebastiaan Hermans, Zixuan Hu, Robyn L Ball, Hui Ming Lin, Luciano M Prevedello, Ferco H Berger, Ibrahim Yusuf, Jeffrey D Rudie, Maryam Vazirabad, Adam E Flanders, George Shih, John Mongan, Savvas Nicolaou, Brett S Marinelli, Melissa A Davis, Kirti Magudia, Ervin Sejdić, Errol Colak
Purpose: To evaluate the performance of the winning machine learning (ML) models from the 2023 RSNA Abdominal Trauma Detection Artificial Intelligence Challenge.

Materials and Methods: The competition was hosted on Kaggle and ran from July 26 to October 15, 2023. The multicenter competition dataset consisted of 4,274 abdominal trauma CT scans in which the solid organs (liver, spleen, and kidneys) were annotated as healthy, low-grade injury, or high-grade injury. Studies were also labeled as positive or negative for the presence of bowel/mesenteric injury and active extravasation. In this study, the performance of the eight award-winning models was retrospectively assessed and compared using various metrics, including the area under the receiver operating characteristic curve (AUC), for each injury category. The reported mean values of these metrics were calculated by averaging performance across all models for each injury type.

Results: The models exhibited strong performance in detecting solid organ injuries, particularly high-grade injuries. For binary detection of injuries, the models demonstrated mean AUCs of 0.92 (range, 0.91-0.94) for liver, 0.91 (range, 0.87-0.93) for splenic, and 0.94 (range, 0.93-0.95) for kidney injuries. The models achieved mean AUCs of 0.98 (range, 0.96-0.98) for high-grade liver, 0.98 (range, 0.97-0.99) for high-grade splenic, and 0.98 (range, 0.97-0.98) for high-grade kidney injuries. For detection of bowel/mesenteric injuries and active extravasation, the models demonstrated mean AUCs of 0.85 (range, 0.74-0.73) and 0.85 (range, 0.79-0.89), respectively.

Conclusion: The award-winning models from the AI challenge demonstrated strong performance in detecting traumatic abdominal injuries on CT scans, particularly high-grade injuries. These models may serve as a performance baseline for future investigations and algorithms. ©RSNA, 2024.
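The mean AUC values above are averages of per-model AUCs within each injury category. A self-contained sketch of how such a summary could be computed, using the pairwise-rank (Mann-Whitney) definition of AUC (illustrative only, not the challenge's evaluation code; function names are hypothetical):

```python
import numpy as np

def roc_auc(labels, scores):
    """AUC via the Mann-Whitney statistic: the probability that a random
    positive case is scored higher than a random negative (ties count 0.5)."""
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[labels], scores[~labels]
    # pairwise comparisons; O(n_pos * n_neg), fine for illustration
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

def mean_auc_across_models(labels, model_scores):
    """Average one injury category's AUC over several models' score lists,
    returning the mean and the (min, max) range, as reported above."""
    aucs = [roc_auc(labels, s) for s in model_scores]
    return float(np.mean(aucs)), (min(aucs), max(aucs))
```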
Combining Biology-based and MRI Data-driven Modeling to Predict Response to Neoadjuvant Chemotherapy in Patients with Triple-Negative Breast Cancer.
IF 8.1
Radiology-Artificial Intelligence Pub Date : 2024-11-06 DOI: 10.1148/ryai.240124
Casey E Stowers, Chengyue Wu, Zhan Xu, Sidharth Kumar, Clinton Yam, Jong Bum Son, Jingfei Ma, Jonathan I Tamir, Gaiane M Rauch, Thomas E Yankeelov
Purpose: To combine deep learning and biology-based modeling to predict the response of locally advanced, triple-negative breast cancer before the initiation of neoadjuvant chemotherapy (NAC).

Materials and Methods: In this retrospective study, a biology-based mathematical model of tumor response to NAC was constructed and calibrated on a patient-specific basis using imaging data from patients enrolled in the MD Anderson ARTEMIS trial (ClinicalTrials.gov, NCT02276443) between April 2018 and May 2021. A convolutional neural network (CNN) was employed to relate the calibrated parameters of the biology-based model to pretreatment MRI data. The CNN predictions of the calibrated model parameters were used to estimate tumor response at the end of NAC. CNN performance in estimating total tumor volume (TTV), total tumor cellularity (TTC), and tumor status was evaluated. Model-predicted TTC and TTV measurements were compared with MRI-based measurements using the concordance correlation coefficient (CCC) and the area under the receiver operating characteristic curve (AUC; for predicting pathologic complete response at the end of NAC).

Results: The study included 118 female patients (median age, 51 years [range, 29-78]). For comparison of CNN-predicted with measured change in TTC and TTV over the course of NAC, the CCCs were 0.95 (95% CI: 0.90-0.98) and 0.94 (95% CI: 0.87-0.97), respectively. CNN-predicted TTC and TTV had AUCs of 0.72 (95% CI: 0.34-0.94) and 0.72 (95% CI: 0.40-0.95), respectively, for predicting tumor status at the time of surgery.

Conclusion: Deep learning integrated with a biology-based mathematical model showed good performance in predicting the spatial and temporal evolution of a patient's tumor during NAC using only pre-NAC MRI data. ©RSNA, 2024.
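The concordance correlation coefficient used above to compare predicted with MRI-measured TTC and TTV is Lin's CCC, which penalizes both poor correlation and systematic bias between two measurements of the same quantity. A minimal sketch (illustrative, not the authors' code):

```python
import numpy as np

def concordance_ccc(x, y):
    """Lin's concordance correlation coefficient between paired measurements.

    Equals 1 only for perfect agreement (y = x); unlike Pearson r, it drops
    when one measurement is systematically shifted or scaled from the other.
    """
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()            # population variances (ddof=0)
    cov = ((x - mx) * (y - my)).mean()   # population covariance
    return 2.0 * cov / (vx + vy + (mx - my) ** 2)
```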
Integrated Deep Learning Model for the Detection, Segmentation, and Morphologic Analysis of Intracranial Aneurysms Using CT Angiography.
IF 8.1
Radiology-Artificial Intelligence Pub Date : 2024-11-06 DOI: 10.1148/ryai.240017
Yi Yang, Zhenyao Chang, Xin Nie, Jun Wu, Jingang Chen, Weiqi Liu, Hongwei He, Shuo Wang, Chengcheng Zhu, Qingyuan Liu
Purpose: To develop a deep learning model for morphologic measurement of unruptured intracranial aneurysms (UIAs) based on CT angiography (CTA) data and to validate its performance using a multicenter dataset.

Materials and Methods: In this retrospective study, patients who underwent CTA examinations, with and without UIAs, at a tertiary referral hospital from February 2018 to February 2021 were included as the training dataset. Patients with UIAs who underwent CTA at multiple centers between April 2021 and December 2022 were included as the multicenter external test set. An integrated deep learning (IDL) model for UIA detection, segmentation, and morphologic measurement was developed using the nnU-Net algorithm. Model performance was evaluated using the Dice similarity coefficient (DSC) and the intraclass correlation coefficient (ICC), with measurements by senior radiologists serving as the reference standard. The ability of the IDL model to improve the performance of junior radiologists in measuring morphologic UIA features was also assessed.

Results: The training dataset included 1182 patients with UIAs and 578 controls without UIAs (median age, 55 years [IQR, 47-62]; 1012 [57.5%] female); the multicenter external test set included 535 patients with UIAs (median age, 57 years [IQR, 50-63]; 353 [66.0%] female). The IDL model achieved 97% accuracy in detecting UIAs and a DSC of 0.90 (95% CI: 0.88-0.92) for UIA segmentation. Model-based morphologic measurements showed good agreement with reference standard measurements (all ICCs > 0.85). Within the multicenter external test set, the IDL model also showed agreement with reference standard measurements (all ICCs > 0.80). Junior radiologists assisted by the IDL model showed significantly improved performance in measuring UIA size (ICC improved from 0.88 [95% CI: 0.80-0.92] to 0.96 [95% CI: 0.92-0.97]; P < .001).

Conclusion: The integrated deep learning model using CTA data showed good performance in UIA detection, segmentation, and morphologic measurement and may be used to assist less experienced radiologists in morphologic analysis of UIAs. ©RSNA, 2024.
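The Dice similarity coefficient reported for UIA segmentation measures voxel overlap between a predicted mask and the reference mask. A minimal sketch (illustrative, not the authors' pipeline):

```python
import numpy as np

def dice(pred, ref):
    """Dice similarity coefficient between two binary masks:
    2 * |intersection| / (|pred| + |ref|), in [0, 1]."""
    pred = np.asarray(pred, dtype=bool)
    ref = np.asarray(ref, dtype=bool)
    inter = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    # convention: two empty masks agree perfectly
    return 2.0 * inter / denom if denom else 1.0
```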
SCIseg: Automatic Segmentation of Intramedullary Lesions in Spinal Cord Injury on T2-weighted MRI Scans.
IF 8.1
Radiology-Artificial Intelligence Pub Date : 2024-11-06 DOI: 10.1148/ryai.240005
Enamundram Naga Karthik, Jan Valošek, Andrew C Smith, Dario Pfyffer, Simon Schading-Sassenhausen, Lynn Farner, Kenneth A Weber, Patrick Freund, Julien Cohen-Adad
Purpose: To develop a deep learning tool for automatic segmentation of the spinal cord and intramedullary lesions in spinal cord injury (SCI) on T2-weighted MRI scans.

Materials and Methods: This retrospective study included MRI data acquired between July 2002 and February 2023 from 191 patients with SCI (mean age, 48.1 years ± 17.9 [SD]; 142 male). The data consisted of T2-weighted MRI scans acquired on scanners from different manufacturers, with various image resolutions (isotropic and anisotropic) and orientations (axial and sagittal). Patients had different lesion etiologies (traumatic, ischemic, and hemorrhagic) and lesion locations across the cervical, thoracic, and lumbar spine. A deep learning model, SCIseg, was trained in a three-phase process involving active learning for automatic segmentation of intramedullary SCI lesions and the spinal cord. Segmentations from the proposed model were compared visually and quantitatively with those from three other open-source methods (PropSeg, DeepSeg, and contrast-agnostic, all part of the Spinal Cord Toolbox). The Wilcoxon signed-rank test was used to compare quantitative MRI biomarkers of SCI (lesion volume, lesion length, and maximal axial damage ratio) derived from the manual reference standard lesion masks with the biomarkers obtained automatically from SCIseg segmentations.

Results: SCIseg achieved Dice scores (mean ± SD) of 0.92 ± 0.07 for spinal cord segmentation and 0.61 ± 0.27 for SCI lesion segmentation. There was no evidence of a difference between lesion length (P = .42) or maximal axial damage ratio (P = .16) computed from manually annotated lesions and from the lesion segmentations obtained using SCIseg.

Conclusion: SCIseg accurately segmented intramedullary lesions on a diverse dataset of T2-weighted MRI scans and extracted relevant lesion biomarkers (namely, lesion volume, lesion length, and maximal axial damage ratio). SCIseg is open source and accessible through the Spinal Cord Toolbox (version 6.2 and above). Published under a CC BY 4.0 license.
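Biomarkers such as lesion volume and lesion length can be derived directly from a binary lesion mask once voxel geometry is known. A hedged sketch of the idea (the axis convention, voxel sizes, and function name are illustrative assumptions, not SCIseg's actual code):

```python
import numpy as np

def lesion_biomarkers(mask, voxel_size_mm=(1.0, 1.0, 1.0), axis=2):
    """Derive two mask-based biomarkers like those named in the abstract:
    total lesion volume (mm^3) and lesion length along one axis (mm).

    mask: 3D binary lesion mask; voxel_size_mm: per-axis voxel spacing;
    axis: the anatomic axis along which length is measured (assumed here).
    """
    mask = np.asarray(mask, dtype=bool)
    voxel_vol = float(np.prod(voxel_size_mm))
    volume = mask.sum() * voxel_vol
    # collapse the other two axes, then measure the occupied extent
    other_axes = tuple(i for i in range(mask.ndim) if i != axis)
    idx = np.nonzero(mask.any(axis=other_axes))[0]
    length = (idx.max() - idx.min() + 1) * voxel_size_mm[axis] if idx.size else 0.0
    return volume, length
```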
Achieving More with Less: Combining Strong and Weak Labels for Intracranial Hemorrhage Detection.
IF 8.1
Radiology-Artificial Intelligence Pub Date : 2024-11-01 DOI: 10.1148/ryai.240670
Tugba Akinci D'Antonoli, Jeffrey D Rudie
Addressing the Generalizability of AI in Radiology Using a Novel Data Augmentation Framework with Synthetic Patient Image Data: Proof-of-Concept and External Validation for Classification Tasks in Multiple Sclerosis.
IF 8.1
Radiology-Artificial Intelligence Pub Date : 2024-11-01 DOI: 10.1148/ryai.230514
Gianluca Brugnara, Chandrakanth Jayachandran Preetha, Katerina Deike, Robert Haase, Thomas Pinetz, Martha Foltyn-Dumitru, Mustafa A Mahmutoglu, Brigitte Wildemann, Ricarda Diem, Wolfgang Wick, Alexander Radbruch, Martin Bendszus, Hagen Meredig, Aditya Rastogi, Philipp Vollmuth
Artificial intelligence (AI) models often face performance drops after deployment to external datasets. This study evaluated the potential of a novel data augmentation framework, based on generative adversarial networks (GANs), that creates synthetic patient image data for model training to improve model generalizability. Model development and external testing were performed for a given classification task, namely the detection of new fluid-attenuated inversion recovery lesions at MRI during longitudinal follow-up of patients with multiple sclerosis (MS). An internal dataset of 669 patients with MS (n = 3083 examinations) was used to develop an attention-based network, trained both with and without the GAN-based synthetic data augmentation framework. External testing was performed on 134 patients with MS from a different institution, whose MR images were acquired using different scanners and protocols than those used during training.

Models trained with synthetic data augmentation showed a significant performance improvement when applied to external data (area under the receiver operating characteristic curve [AUC], 83.6% without versus 93.3% with synthetic data augmentation; P = .03), achieving results comparable with the internal test set (AUC, 95.0%; P = .53), whereas models trained without synthetic data augmentation demonstrated a performance drop on external testing (AUC, 93.8% on the internal dataset versus 83.6% on external data; P = .03). Data augmentation with synthetic patient data substantially improved the performance of AI models on unseen MRI data and may be extended to other clinical conditions or tasks to mitigate domain shift, limit class imbalance, and enhance the robustness of AI applications in medical imaging.

Keywords: Brain, Brain Stem, Multiple Sclerosis, Synthetic Data Augmentation, Generative Adversarial Network. Supplemental material is available for this article. © RSNA, 2024.
Boosting Deep Learning for Interpretable Brain MRI Lesion Detection through the Integration of Radiology Report Information.
IF 8.1
Radiology-Artificial Intelligence Pub Date : 2024-11-01 DOI: 10.1148/ryai.230520
Lisong Dai, Jiayu Lei, Fenglong Ma, Zheng Sun, Haiyan Du, Houwang Zhang, Jingxuan Jiang, Jianyong Wei, Dan Wang, Guang Tan, Xinyu Song, Jinyu Zhu, Qianqian Zhao, Songtao Ai, Ai Shang, Zhaohui Li, Ya Zhang, Yuehua Li
Purpose: To guide the attention of a deep learning (DL) model toward MRI characteristics of brain lesions by incorporating textual features derived from radiology reports, achieving interpretable lesion detection.

Materials and Methods: In this retrospective study, 35 282 brain MRI scans (January 2018 to June 2023) and the corresponding radiology reports from center 1 were used for training, validation, and internal testing. A total of 2655 brain MRI scans (January 2022 to December 2022) from centers 2-5 were reserved for external testing. Textual features were extracted from radiology reports to guide a DL model (ReportGuidedNet) to focus on lesion characteristics. Another DL model (PlainNet), without textual features, was developed for comparison. Both models identified 15 conditions, including 14 diseases and normal brains. The performance of each model was assessed using the macro-averaged area under the receiver operating characteristic curve (ma-AUC) and the micro-averaged AUC (mi-AUC). Attention maps, which visualize model attention, were assessed with a five-point Likert scale.

Results: ReportGuidedNet outperformed PlainNet for all diagnoses on both the internal (ma-AUC, 0.93 [95% CI: 0.91, 0.95] vs 0.85 [95% CI: 0.81, 0.88]; mi-AUC, 0.93 [95% CI: 0.90, 0.95] vs 0.89 [95% CI: 0.83, 0.92]) and external (ma-AUC, 0.91 [95% CI: 0.88, 0.93] vs 0.75 [95% CI: 0.72, 0.79]; mi-AUC, 0.90 [95% CI: 0.87, 0.92] vs 0.76 [95% CI: 0.72, 0.80]) testing sets. The performance difference between internal and external testing sets was smaller for ReportGuidedNet than for PlainNet (Δma-AUC, 0.03 vs 0.10; Δmi-AUC, 0.02 vs 0.13). The Likert score for ReportGuidedNet was higher than that for PlainNet (mean ± SD, 2.50 ± 1.09 vs 1.32 ± 1.20; P < .001).

Conclusion: Integrating textual features from radiology reports improved the ability of the DL model to detect brain lesions, enhancing both interpretability and generalizability.

Keywords: Deep Learning, Computer-aided Diagnosis, Knowledge-driven Model, Radiology Report, Brain MRI. Supplemental material is available for this article. Published under a CC BY 4.0 license.
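Macro-averaged AUC (ma-AUC) averages the per-condition AUCs, while micro-averaged AUC (mi-AUC) pools all (scan, condition) decisions before computing a single AUC, so frequent conditions weigh more heavily. An illustrative sketch of the distinction (not the study's evaluation code; function names are assumptions):

```python
import numpy as np

def roc_auc(labels, scores):
    """Pairwise-rank AUC; ties between a positive and a negative count 0.5."""
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[labels], scores[~labels]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

def macro_micro_auc(y_true, y_score):
    """ma-AUC: mean of per-class AUCs. mi-AUC: one AUC over all
    (sample, class) pairs pooled together.

    y_true: (n, k) binary label matrix; y_score: (n, k) score matrix.
    """
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    per_class = [roc_auc(y_true[:, c], y_score[:, c])
                 for c in range(y_true.shape[1])]
    ma = float(np.mean(per_class))
    mi = float(roc_auc(y_true.ravel(), y_score.ravel()))
    return ma, mi
```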
The RSNA Abdominal Traumatic Injury CT (RATIC) Dataset.
IF 8.1
Radiology-Artificial Intelligence Pub Date : 2024-11-01 DOI: 10.1148/ryai.240101
Jeffrey D Rudie, Hui-Ming Lin, Robyn L Ball, Sabeena Jalal, Luciano M Prevedello, Savvas Nicolaou, Brett S Marinelli, Adam E Flanders, Kirti Magudia, George Shih, Melissa A Davis, John Mongan, Peter D Chang, Ferco H Berger, Sebastiaan Hermans, Meng Law, Tyler Richards, Jan-Peter Grunz, Andreas Steven Kunz, Shobhit Mathur, Sandro Galea-Soler, Andrew D Chung, Saif Afat, Chin-Chi Kuo, Layal Aweidah, Ana Villanueva Campos, Arjuna Somasundaram, Felipe Antonio Sanchez Tijmes, Attaporn Jantarangkoon, Leonardo Kayat Bittencourt, Michael Brassil, Ayoub El Hajjami, Hakan Dogan, Muris Becircic, Agrahara G Bharatkumar, Eduardo Moreno Júdice de Mattos Farina, Errol Colak
Supplemental material is available for this article.
Precise Image-level Localization of Intracranial Hemorrhage on Head CT Scans with Deep Learning Models Trained on Study-level Labels.
IF 8.1
Radiology-Artificial Intelligence Pub Date : 2024-11-01 DOI: 10.1148/ryai.230296
Yunan Wu, Michael Iorga, Suvarna Badhe, James Zhang, Donald R Cantrell, Elaine J Tanhehco, Nicholas Szrama, Andrew M Naidech, Michael Drakopoulos, Shamis T Hasan, Kunal M Patel, Tarek A Hijaz, Eric J Russell, Shamal Lalvani, Amit Adate, Todd B Parrish, Aggelos K Katsaggelos, Virginia B Hill
Purpose: To develop a highly generalizable weakly supervised model that automatically detects and localizes intracranial hemorrhage (ICH) at the image level using only study-level labels.

Materials and Methods: In this retrospective study, the proposed model was pretrained on the image-level Radiological Society of North America dataset and fine-tuned on a local dataset using attention-based bidirectional long short-term memory networks. The local training dataset included 10 699 noncontrast head CT scans in 7469 patients, with ICH study-level labels extracted from radiology reports. Model performance was compared with that of two senior neuroradiologists on 100 random test scans using the McNemar test, and generalizability was evaluated on an external independent dataset.

Results: The model achieved a positive predictive value (PPV) of 85.7% (95% CI: 84.0, 87.4) and an area under the receiver operating characteristic curve (AUC) of 0.96 (95% CI: 0.96, 0.97) on the held-out local test set (n = 7243; 3721 female) and a PPV of 89.3% (95% CI: 87.8, 90.7) and AUC of 0.96 (95% CI: 0.96, 0.97) on the external test set (n = 491; 178 female). On 100 randomly selected scans, the model performed on par with the two neuroradiologists, but with a significantly shorter (P < .05) diagnostic time of 5.04 seconds per scan (versus 86 seconds and 22.2 seconds for the two neuroradiologists, respectively). The model's attention weights and heatmaps visually aligned with the neuroradiologists' interpretations.

Conclusion: The proposed model demonstrated high generalizability and high PPVs, offering a valuable tool for expedited ICH detection and prioritization while reducing false-positive interruptions in radiologists' workflows.

Keywords: Computer-Aided Diagnosis (CAD), Brain/Brain Stem, Hemorrhage, Convolutional Neural Network (CNN), Transfer Learning. Supplemental material is available for this article. © RSNA, 2024. See also the commentary by Akinci D'Antonoli and Rudie in this issue.
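The positive predictive value (PPV) reported above is the precision of the model's positive calls: of the scans flagged as ICH-positive, the fraction that truly contain hemorrhage. A minimal sketch (illustrative only):

```python
def ppv(y_true, y_pred):
    """Positive predictive value (precision) from binary labels and
    binary predictions: TP / (TP + FP)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if p and t)
    fp = sum(1 for t, p in zip(y_true, y_pred) if p and not t)
    return tp / (tp + fp) if (tp + fp) else float("nan")
```

A high PPV matters in a worklist-prioritization setting because each false-positive flag interrupts a radiologist's workflow.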