Radiology-Artificial Intelligence: Latest Articles

RSNA 2023 Abdominal Trauma AI Challenge: Review and Outcomes.
IF 8.1
Radiology-Artificial Intelligence Pub Date: 2025-01-01 DOI: 10.1148/ryai.240334
Sebastiaan Hermans, Zixuan Hu, Robyn L Ball, Hui Ming Lin, Luciano M Prevedello, Ferco H Berger, Ibrahim Yusuf, Jeffrey D Rudie, Maryam Vazirabad, Adam E Flanders, George Shih, John Mongan, Savvas Nicolaou, Brett S Marinelli, Melissa A Davis, Kirti Magudia, Ervin Sejdić, Errol Colak
{"title":"RSNA 2023 Abdominal Trauma AI Challenge: Review and Outcomes.","authors":"Sebastiaan Hermans, Zixuan Hu, Robyn L Ball, Hui Ming Lin, Luciano M Prevedello, Ferco H Berger, Ibrahim Yusuf, Jeffrey D Rudie, Maryam Vazirabad, Adam E Flanders, George Shih, John Mongan, Savvas Nicolaou, Brett S Marinelli, Melissa A Davis, Kirti Magudia, Ervin Sejdić, Errol Colak","doi":"10.1148/ryai.240334","DOIUrl":"10.1148/ryai.240334","url":null,"abstract":"<p><p>Purpose To evaluate the performance of the winning machine learning models from the 2023 RSNA Abdominal Trauma Detection AI Challenge. Materials and Methods The competition was hosted on Kaggle and took place between July 26 and October 15, 2023. The multicenter competition dataset consisted of 4274 abdominal trauma CT scans, in which solid organs (liver, spleen, and kidneys) were annotated as healthy, low-grade, or high-grade injury. Studies were labeled as positive or negative for the presence of bowel and mesenteric injury and active extravasation. In this study, performances of the eight award-winning models were retrospectively assessed and compared using various metrics, including the area under the receiver operating characteristic curve (AUC), for each injury category. The reported mean values of these metrics were calculated by averaging the performance across all models for each specified injury type. Results The models exhibited strong performance in detecting solid organ injuries, particularly high-grade injuries. For binary detection of injuries, the models demonstrated mean AUC values of 0.92 (range, 0.90-0.94) for liver, 0.91 (range, 0.87-0.93) for splenic, and 0.94 (range, 0.93-0.95) for kidney injuries. The models achieved mean AUC values of 0.98 (range, 0.96-0.98) for high-grade liver, 0.98 (range, 0.97-0.99) for high-grade splenic, and 0.98 (range, 0.97-0.98) for high-grade kidney injuries. For the detection of bowel and mesenteric injuries and active extravasation, the models demonstrated mean AUC values of 0.85 (range, 0.74-0.93) and 0.85 (range, 0.79-0.89), respectively. Conclusion The award-winning models from the artificial intelligence challenge demonstrated strong performance in the detection of traumatic abdominal injuries on CT scans, particularly high-grade injuries. These models may serve as a performance baseline for future investigations and algorithms. <b>Keywords:</b> Abdominal Trauma, CT, American Association for the Surgery of Trauma, Machine Learning, Artificial Intelligence <i>Supplemental material is available for this article.</i> © RSNA, 2024.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e240334"},"PeriodicalIF":8.1,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142584281","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
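The challenge metrics above are per-category AUCs averaged over the eight award-winning models. A minimal sketch of that aggregation, assuming scikit-learn is available; the labels and scores below are synthetic stand-ins, not challenge data:

```python
# Per-model AUC for one injury category, then the mean and range across
# models, as reported in the challenge review. Synthetic data only.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=500)               # 1 = injury present
model_scores = [rng.random(500) for _ in range(8)]  # 8 award-winning models

aucs = np.array([roc_auc_score(y_true, s) for s in model_scores])
print(f"mean AUC: {aucs.mean():.2f} (range, {aucs.min():.2f}-{aucs.max():.2f})")
```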
Using Artificial Intelligence to Improve Diagnosis of Unruptured Intracranial Aneurysms.
IF 8.1
Radiology-Artificial Intelligence Pub Date: 2025-01-01 DOI: 10.1148/ryai.240696
Shuncong Wang
{"title":"Using Artificial Intelligence to Improve Diagnosis of Unruptured Intracranial Aneurysms.","authors":"Shuncong Wang","doi":"10.1148/ryai.240696","DOIUrl":"https://doi.org/10.1148/ryai.240696","url":null,"abstract":"","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":"7 1","pages":"e240696"},"PeriodicalIF":8.1,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142984832","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Combining Biology-based and MRI Data-driven Modeling to Predict Response to Neoadjuvant Chemotherapy in Patients with Triple-Negative Breast Cancer.
IF 8.1
Radiology-Artificial Intelligence Pub Date: 2025-01-01 DOI: 10.1148/ryai.240124
Casey E Stowers, Chengyue Wu, Zhan Xu, Sidharth Kumar, Clinton Yam, Jong Bum Son, Jingfei Ma, Jonathan I Tamir, Gaiane M Rauch, Thomas E Yankeelov
{"title":"Combining Biology-based and MRI Data-driven Modeling to Predict Response to Neoadjuvant Chemotherapy in Patients with Triple-Negative Breast Cancer.","authors":"Casey E Stowers, Chengyue Wu, Zhan Xu, Sidharth Kumar, Clinton Yam, Jong Bum Son, Jingfei Ma, Jonathan I Tamir, Gaiane M Rauch, Thomas E Yankeelov","doi":"10.1148/ryai.240124","DOIUrl":"10.1148/ryai.240124","url":null,"abstract":"<p><p>Purpose To combine deep learning and biology-based modeling to predict the response of locally advanced, triple-negative breast cancer before initiating neoadjuvant chemotherapy (NAC). Materials and Methods In this retrospective study, a biology-based mathematical model of tumor response to NAC was constructed and calibrated on a patient-specific basis using imaging data from patients enrolled in the MD Anderson A Robust TNBC Evaluation FraMework to Improve Survival trial (ARTEMIS; ClinicalTrials.gov registration no. NCT02276443) between April 2018 and May 2021. To relate the calibrated parameters in the biology-based model and pretreatment MRI data, a convolutional neural network (CNN) was employed. The CNN predictions of the calibrated model parameters were used to estimate tumor response at the end of NAC. CNN performance in the estimations of total tumor volume (TTV), total tumor cellularity (TTC), and tumor status was evaluated. Model-predicted TTC and TTV measurements were compared with MRI-based measurements using the concordance correlation coefficient and area under the receiver operating characteristic curve (for predicting pathologic complete response at the end of NAC). Results The study included 118 female patients (median age, 51 years [range, 29-78 years]). For comparison of CNN predicted to measured change in TTC and TTV over the course of NAC, the concordance correlation coefficient values were 0.95 (95% CI: 0.90, 0.98) and 0.94 (95% CI: 0.87, 0.97), respectively. CNN-predicted TTC and TTV had an area under the receiver operating characteristic curve of 0.72 (95% CI: 0.34, 0.94) and 0.72 (95% CI: 0.40, 0.95) for predicting tumor status at the time of surgery, respectively. Conclusion Deep learning integrated with a biology-based mathematical model showed good performance in predicting the spatial and temporal evolution of a patient's tumor during NAC using only pre-NAC MRI data. <b>Keywords:</b> Triple-Negative Breast Cancer, Neoadjuvant Chemotherapy, Convolutional Neural Network, Biology-based Mathematical Model <i>Supplemental material is available for this article.</i> Clinical trial registration no. NCT02276443 ©RSNA, 2024 See also commentary by Mei and Huang in this issue.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e240124"},"PeriodicalIF":8.1,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11791743/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142584318","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
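The agreement metric named in this abstract is the concordance correlation coefficient (Lin's CCC), which penalizes both poor correlation and systematic bias. A self-contained sketch; the predicted-vs-measured values below are hypothetical, not study data:

```python
# Lin's concordance correlation coefficient between two paired measurements.
import numpy as np

def concordance_cc(x: np.ndarray, y: np.ndarray) -> float:
    """CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)."""
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()          # population covariance
    return 2 * cov / (x.var() + y.var() + (mx - my) ** 2)

# Hypothetical predicted vs measured change in total tumor volume.
pred = np.array([0.9, 1.4, 2.1, 0.3, 1.8])
meas = np.array([1.0, 1.5, 2.0, 0.4, 1.7])
print(round(concordance_cc(pred, meas), 3))
```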
A Machine Learning Model to Harmonize Volumetric Brain MRI Data for Quantitative Neuroradiologic Assessment of Alzheimer Disease.
IF 8.1
Radiology-Artificial Intelligence Pub Date: 2025-01-01 DOI: 10.1148/ryai.240030
Damiano Archetti, Vikram Venkatraghavan, Béla Weiss, Pierrick Bourgeat, Tibor Auer, Zoltán Vidnyánszky, Stanley Durrleman, Wiesje M van der Flier, Frederik Barkhof, Daniel C Alexander, Andre Altmann, Alberto Redolfi, Betty M Tijms, Neil P Oxtoby
{"title":"A Machine Learning Model to Harmonize Volumetric Brain MRI Data for Quantitative Neuroradiologic Assessment of Alzheimer Disease.","authors":"Damiano Archetti, Vikram Venkatraghavan, Béla Weiss, Pierrick Bourgeat, Tibor Auer, Zoltán Vidnyánszky, Stanley Durrleman, Wiesje M van der Flier, Frederik Barkhof, Daniel C Alexander, Andre Altmann, Alberto Redolfi, Betty M Tijms, Neil P Oxtoby","doi":"10.1148/ryai.240030","DOIUrl":"10.1148/ryai.240030","url":null,"abstract":"<p><p>Purpose To extend a previously developed machine learning algorithm for harmonizing brain volumetric data of individuals undergoing neuroradiologic assessment of Alzheimer disease not encountered during model training. Materials and Methods Neuroharmony is a recently developed method that uses image quality metrics as predictors to remove scanner-related effects in brain-volumetric data using random forest regression. To account for the interactions between Alzheimer disease pathology and image quality metrics during harmonization, the authors developed a multiclass extension of Neuroharmony for individuals with and without cognitive impairment. Cross-validation experiments were performed to benchmark performance against other available strategies using data from 20 864 participants with and without cognitive impairment, spanning 11 prospective and retrospective cohorts and 43 scanners. Evaluation metrics assessed the ability to remove scanner-related variations in brain volumes (marker concordance between scanner pairs) while retaining the ability to delineate different diagnostic groups (preserving disease-related signal). Results For each strategy, marker concordances between scanners were significantly better (<i>P</i> < .001) compared with preharmonized data. The proposed multiclass model achieved significantly higher concordance (mean, 0.75 ± 0.09 [SD]) than the Neuroharmony model trained on individuals without cognitive impairment (mean, 0.70 ± 0.11) and preserved disease-related signal (∆AUC [area under the receiver operating characteristic curve] = -0.006 ± 0.027) better than the Neuroharmony model trained on individuals with and without cognitive impairment that did not use the proposed extension (∆AUC = -0.091 ± 0.036). The marker concordance was better in scanners seen during training (concordance > 0.97) than unseen (concordance < 0.79), independent of cognitive status. Conclusion In a large-scale multicenter dataset, the proposed multiclass Neuroharmony model outperformed other available strategies for harmonizing brain volumetric data from unseen scanners in a clinical setting. <b>Keywords:</b> Image Postprocessing, MR Imaging, Dementia, Random Forest <i>Supplemental material is available for this article.</i> Published under a CC BY 4.0 license See also commentary by Haller in this issue.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e240030"},"PeriodicalIF":8.1,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142847882","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
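The core Neuroharmony idea described above is to regress the scanner-related component of a regional volume on image quality metrics (IQMs) with a random forest and subtract the prediction. A simplified sketch on synthetic data; the published model's IQM features, reference definition, and the paper's multiclass extension are not reproduced here:

```python
# Toy harmonization: learn the scanner-induced deviation of an observed
# volume from a reference value as a function of IQMs, then remove it.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
n = 1000
iqms = rng.normal(size=(n, 5))                        # e.g., SNR, CNR, blur
scanner_effect = 0.8 * iqms[:, 0] - 0.5 * iqms[:, 2]  # simulated bias
true_volume = rng.normal(loc=10.0, scale=1.0, size=n) # "biological" volume, mL
observed = true_volume + scanner_effect

# Regress the deviation from the reference on the IQMs.
rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(iqms, observed - true_volume)

harmonized = observed - rf.predict(iqms)
print(f"residual scanner effect: {np.abs(harmonized - true_volume).mean():.3f} mL")
```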
Integrated Deep Learning Model for the Detection, Segmentation, and Morphologic Analysis of Intracranial Aneurysms Using CT Angiography.
IF 8.1
Radiology-Artificial Intelligence Pub Date: 2025-01-01 DOI: 10.1148/ryai.240017
Yi Yang, Zhenyao Chang, Xin Nie, Jun Wu, Jingang Chen, Weiqi Liu, Hongwei He, Shuo Wang, Chengcheng Zhu, Qingyuan Liu
{"title":"Integrated Deep Learning Model for the Detection, Segmentation, and Morphologic Analysis of Intracranial Aneurysms Using CT Angiography.","authors":"Yi Yang, Zhenyao Chang, Xin Nie, Jun Wu, Jingang Chen, Weiqi Liu, Hongwei He, Shuo Wang, Chengcheng Zhu, Qingyuan Liu","doi":"10.1148/ryai.240017","DOIUrl":"10.1148/ryai.240017","url":null,"abstract":"<p><p>Purpose To develop a deep learning model for the morphologic measurement of unruptured intracranial aneurysms (UIAs) based on CT angiography (CTA) data and validate its performance using a multicenter dataset. Materials and Methods In this retrospective study, patients with CTA examinations, including those with and without UIAs, in a tertiary referral hospital from February 2018 to February 2021 were included as the training dataset. Patients with UIAs who underwent CTA at multiple centers between April 2021 and December 2022 were included as the multicenter external testing set. An integrated deep learning (IDL) model was developed for UIA detection, segmentation, and morphologic measurement using an nnU-Net algorithm. Model performance was evaluated using the Dice similarity coefficient (DSC) and intraclass correlation coefficient (ICC), with measurements by senior radiologists serving as the reference standard. The ability of the IDL model to improve performance of junior radiologists in measuring morphologic UIA features was assessed. Results The study included 1182 patients with UIAs and 578 controls without UIAs as the training dataset (median age, 55 years [IQR, 47-62 years], 1012 [57.5%] female) and 535 patients with UIAs as the multicenter external testing set (median age, 57 years [IQR, 50-63 years], 353 [66.0%] female). The IDL model achieved 97% accuracy in detecting UIAs and achieved a DSC of 0.90 (95% CI: 0.88, 0.92) for UIA segmentation. Model-based morphologic measurements showed good agreement with reference standard measurements (all ICCs > 0.85). Within the multicenter external testing set, the IDL model also showed agreement with reference standard measurements (all ICCs > 0.80). Junior radiologists assisted by the IDL model showed significantly improved performance in measuring UIA size (ICC improved from 0.88 [95% CI: 0.80, 0.92] to 0.96 [95% CI: 0.92, 0.97], <i>P</i> < .001). Conclusion The developed integrated deep learning model using CTA data showed good performance in UIA detection, segmentation, and morphologic measurement and may be used to assist less experienced radiologists in morphologic analysis of UIAs. <b>Keywords:</b> Segmentation, CT Angiography, Head/Neck, Aneurysms, Comparative Studies <i>Supplemental material is available for this article.</i> © RSNA, 2024 See also the commentary by Wang in this issue.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e240017"},"PeriodicalIF":8.1,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142584278","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
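The overlap metric reported for the segmentation component is the Dice similarity coefficient. A minimal sketch on toy binary masks (not study data):

```python
# Dice similarity coefficient between a predicted and a reference mask:
# DSC = 2|A ∩ B| / (|A| + |B|).
import numpy as np

def dice(pred: np.ndarray, ref: np.ndarray, eps: float = 1e-8) -> float:
    pred, ref = pred.astype(bool), ref.astype(bool)
    return 2.0 * np.logical_and(pred, ref).sum() / (pred.sum() + ref.sum() + eps)

# Toy 2D masks standing in for model and reference-standard segmentations.
ref = np.zeros((64, 64), dtype=bool)
ref[20:40, 20:40] = True
pred = np.zeros((64, 64), dtype=bool)
pred[22:42, 20:40] = True
print(round(dice(pred, ref), 3))   # 0.9 for this overlap
```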
SCIseg: Automatic Segmentation of Intramedullary Lesions in Spinal Cord Injury on T2-weighted MRI Scans.
IF 8.1
Radiology-Artificial Intelligence Pub Date: 2025-01-01 DOI: 10.1148/ryai.240005
Enamundram Naga Karthik, Jan Valošek, Andrew C Smith, Dario Pfyffer, Simon Schading-Sassenhausen, Lynn Farner, Kenneth A Weber, Patrick Freund, Julien Cohen-Adad
{"title":"SCIseg: Automatic Segmentation of Intramedullary Lesions in Spinal Cord Injury on T2-weighted MRI Scans.","authors":"Enamundram Naga Karthik, Jan Valošek, Andrew C Smith, Dario Pfyffer, Simon Schading-Sassenhausen, Lynn Farner, Kenneth A Weber, Patrick Freund, Julien Cohen-Adad","doi":"10.1148/ryai.240005","DOIUrl":"10.1148/ryai.240005","url":null,"abstract":"<p><p>Purpose To develop a deep learning tool for the automatic segmentation of the spinal cord and intramedullary lesions in spinal cord injury (SCI) on T2-weighted MRI scans. Materials and Methods This retrospective study included MRI data acquired between July 2002 and February 2023. The data consisted of T2-weighted MRI scans acquired using different scanner manufacturers with various image resolutions (isotropic and anisotropic) and orientations (axial and sagittal). Patients had different lesion etiologies (traumatic, ischemic, and hemorrhagic) and lesion locations across the cervical, thoracic, and lumbar spine. A deep learning model, SCIseg (which is open source and accessible through the Spinal Cord Toolbox, version 6.2 and above), was trained in a three-phase process involving active learning for the automatic segmentation of intramedullary SCI lesions and the spinal cord. The segmentations from the proposed model were visually and quantitatively compared with those from three other open-source methods (PropSeg, DeepSeg, and contrast-agnostic, all part of the Spinal Cord Toolbox). The Wilcoxon signed rank test was used to compare quantitative MRI biomarkers of SCI (lesion volume, lesion length, and maximal axial damage ratio) derived from the manual reference standard lesion masks and biomarkers obtained automatically with SCIseg segmentations. Results The study included 191 patients with SCI (mean age, 48.1 years ± 17.9 [SD]; 142 [74%] male patients). SCIseg achieved a mean Dice score of 0.92 ± 0.07 and 0.61 ± 0.27 for spinal cord and SCI lesion segmentation, respectively. There was no evidence of a difference between lesion length (<i>P</i> = .42) and maximal axial damage ratio (<i>P</i> = .16) computed from manually annotated lesions and the lesion segmentations obtained using SCIseg. Conclusion SCIseg accurately segmented intramedullary lesions on a diverse dataset of T2-weighted MRI scans and automatically extracted clinically relevant lesion characteristics. <b>Keywords:</b> Spinal Cord, Trauma, Segmentation, MR Imaging, Supervised Learning, Convolutional Neural Network (CNN) Published under a CC BY 4.0 license.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e240005"},"PeriodicalIF":8.1,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11791505/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142584300","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
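The paired comparison of manually derived and SCIseg-derived biomarkers uses the Wilcoxon signed rank test. A sketch with hypothetical lesion lengths (not study data), assuming SciPy is available:

```python
# Wilcoxon signed rank test on paired biomarker measurements; a large
# P value indicates no evidence of a systematic difference.
import numpy as np
from scipy.stats import wilcoxon

manual = np.array([31.2, 18.5, 44.0, 12.3, 27.8, 35.1, 22.9, 40.4])     # mm
automatic = np.array([30.5, 19.1, 43.2, 12.9, 27.1, 36.0, 22.4, 41.0])  # mm

stat, p = wilcoxon(manual, automatic)
print(f"W = {stat:.1f}, P = {p:.2f}")
```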
FastMRI Breast: A Publicly Available Radial k-Space Dataset of Breast Dynamic Contrast-enhanced MRI.
IF 8.1
Radiology-Artificial Intelligence Pub Date: 2025-01-01 DOI: 10.1148/ryai.240345
Eddy Solomon, Patricia M Johnson, Zhengguo Tan, Radhika Tibrewala, Yvonne W Lui, Florian Knoll, Linda Moy, Sungheon Gene Kim, Laura Heacock
{"title":"FastMRI Breast: A Publicly Available Radial k-Space Dataset of Breast Dynamic Contrast-enhanced MRI.","authors":"Eddy Solomon, Patricia M Johnson, Zhengguo Tan, Radhika Tibrewala, Yvonne W Lui, Florian Knoll, Linda Moy, Sungheon Gene Kim, Laura Heacock","doi":"10.1148/ryai.240345","DOIUrl":"10.1148/ryai.240345","url":null,"abstract":"","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e240345"},"PeriodicalIF":8.1,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11791504/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142956032","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Accuracy of Fully Automated and Human-assisted Artificial Intelligence-based CT Quantification of Pleural Effusion Changes after Thoracentesis.
IF 8.1
Radiology-Artificial Intelligence Pub Date: 2025-01-01 DOI: 10.1148/ryai.240215
Eui Jin Hwang, Hyunsook Hong, Seungyeon Ko, Seung-Jin Yoo, Hyungjin Kim, Dahee Kim, Soon Ho Yoon
{"title":"Accuracy of Fully Automated and Human-assisted Artificial Intelligence-based CT Quantification of Pleural Effusion Changes after Thoracentesis.","authors":"Eui Jin Hwang, Hyunsook Hong, Seungyeon Ko, Seung-Jin Yoo, Hyungjin Kim, Dahee Kim, Soon Ho Yoon","doi":"10.1148/ryai.240215","DOIUrl":"10.1148/ryai.240215","url":null,"abstract":"<p><p>Quantifying pleural effusion change at chest CT is important for evaluating disease severity and treatment response. The purpose of this study was to assess the accuracy of artificial intelligence (AI)-based volume quantification of pleural effusion change on CT images, using the volume of drained fluid as the reference standard. Seventy-nine participants (mean age ± SD, 65 years ± 13; 47 male) undergoing thoracentesis were prospectively enrolled from October 2021 to September 2023. Chest CT scans were obtained just before and after thoracentesis. The volume of pleural fluid on each CT scan, with the difference representing the drained fluid volume, was measured by automated segmentation (fully automated measurement). An expert thoracic radiologist then manually corrected these automated volume measurements (human-assisted measurement). Both fully automated (median percentage error, 13.1%; maximum estimated 95% error, 708 mL) and human-assisted measurements (median percentage error, 10.9%; maximum estimated 95% error, 312 mL) systematically underestimated the volume of drained fluid, beyond the equivalence margin. The magnitude of underestimation increased proportionally to the drainage volume. Agreements between fully automated and human-assisted measurements (intraclass correlation coefficient [ICC], 0.99) and the test-retest reliability of fully automated (ICC, 0.995) and human-assisted (ICC, 0.997) measurements were excellent. These results highlight a potential systematic discrepancy between AI segmentation-based CT quantification of pleural effusion volume change and actual volume change. <b>Keywords:</b> CT-Quantitative, Thorax, Pleura, Segmentation Clinical Research Information Service registration no. KCT0006683 <i>Supplemental material is available for this article.</i> © RSNA, 2025.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e240215"},"PeriodicalIF":8.1,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142984841","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
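The accuracy metric here is the percentage error of the CT-measured volume change against the drained-fluid reference. An illustrative sketch with made-up volumes showing how a median percentage error and a systematic underestimation would be detected:

```python
# Percentage error of AI-measured effusion change vs the drained-volume
# reference. All numbers are illustrative, not study data.
import numpy as np

drained_ml = np.array([800.0, 1200.0, 450.0, 1500.0, 950.0])    # reference
ai_change_ml = np.array([690.0, 1080.0, 400.0, 1290.0, 860.0])  # CT-based change

pct_error = 100.0 * (ai_change_ml - drained_ml) / drained_ml
print(f"median percentage error: {np.median(np.abs(pct_error)):.1f}%")
print("systematic underestimation:", bool((pct_error < 0).all()))
```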
Machine Learning to Harmonize Interscanner Variability of Brain MRI Volumetry: Why and How.
IF 8.1
Radiology-Artificial Intelligence Pub Date: 2025-01-01 DOI: 10.1148/ryai.240779
Sven Haller
{"title":"Machine Learning to Harmonize Interscanner Variability of Brain MRI Volumetry: Why and How.","authors":"Sven Haller","doi":"10.1148/ryai.240779","DOIUrl":"https://doi.org/10.1148/ryai.240779","url":null,"abstract":"","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":"7 1","pages":"e240779"},"PeriodicalIF":8.1,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143013115","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Applying Conformal Prediction to a Deep Learning Model for Intracranial Hemorrhage Detection to Improve Trustworthiness.
IF 8.1
Radiology-Artificial Intelligence Pub Date: 2024-11-27 DOI: 10.1148/ryai.240032
Cooper Gamble, Shahriar Faghani, Bradley J Erickson
{"title":"Applying Conformal Prediction to a Deep Learning Model for Intracranial Hemorrhage Detection to Improve Trustworthiness.","authors":"Cooper Gamble, Shahriar Faghani, Bradley J Erickson","doi":"10.1148/ryai.240032","DOIUrl":"https://doi.org/10.1148/ryai.240032","url":null,"abstract":"<p><p><i>\"Just Accepted\" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content.</i> Purpose To apply conformal prediction to a deep learning (DL) model for intracranial hemorrhage (ICH) detection and evaluate model performance in detection as well as model accuracy in identifying challenging cases. Materials and Methods This was a retrospective (November 2017 through December 2017) study of 491 noncontrast head CT volumes from the CQ500 dataset in which three senior radiologists annotated sections containing ICH. The dataset was split into definite and challenging (uncertain) subsets, where challenging images were defined as those in which there was disagreement among readers. A DL model was trained on 146 patients (mean age = 45.7, 70 females, 76 males) from the definite data (training dataset) to perform ICH localization and classification into five classes. To develop an uncertainty-aware DL model, 1,546 sections of the definite data (calibration dataset) was used for Mondrian conformal prediction (MCP). The uncertainty-aware DL model was tested on 8,401 definite and challenging sections to assess its ability to identify challenging sections. The difference in predictive performance (<i>P</i> value) and ability to identify challenging sections (accuracy) were reported. Results After the MCP procedure, the model achieved an F1 score of 0.920 for ICH classification on the test dataset. Additionally, it correctly identified 6,837 of the 6,856 total challenging sections as challenging (99.7% accuracy). It did not incorrectly label any definite sections as challenging. Conclusion The uncertainty-aware MCP-augmented DL model achieved high performance in ICH detection and high accuracy in identifying challenging sections, suggesting its usefulness in automated ICH detection and potential to increase trustworthiness of DL models in radiology. ©RSNA, 2024.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e240032"},"PeriodicalIF":8.1,"publicationDate":"2024-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142732987","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
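Mondrian conformal prediction, as named in this abstract, calibrates a nonconformity threshold separately for each class; a test section whose prediction set is not a single label can then be flagged as challenging. A simplified sketch on synthetic softmax outputs; the paper's actual nonconformity scores, error rate, and calibration details may differ:

```python
# Class-conditional (Mondrian) conformal prediction over softmax outputs.
import numpy as np

rng = np.random.default_rng(1)
n_cal, n_classes = 1546, 5                      # calibration size from the paper
cal_probs = rng.dirichlet(np.ones(n_classes), size=n_cal)
cal_labels = rng.integers(0, n_classes, size=n_cal)

alpha = 0.1                                     # target error rate (assumed)
# Nonconformity score: 1 - softmax probability of the true class.
scores = 1.0 - cal_probs[np.arange(n_cal), cal_labels]
# Mondrian step: one quantile per class (simplified; the exact finite-sample
# level is ceil((n_c + 1) * (1 - alpha)) / n_c for class size n_c).
thresholds = np.array([np.quantile(scores[cal_labels == c], 1 - alpha)
                       for c in range(n_classes)])

def prediction_set(probs: np.ndarray) -> np.ndarray:
    """Classes whose nonconformity falls within their class threshold."""
    return np.where(1.0 - probs <= thresholds)[0]

test_probs = rng.dirichlet(np.ones(n_classes))
pred = prediction_set(test_probs)
print("prediction set:", pred, "| challenging:", len(pred) != 1)
```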