Radiology-Artificial Intelligence: Latest Articles

NNFit: A Self-Supervised Deep Learning Method for Accelerated Quantification of High-Resolution Short Echo Time MR Spectroscopy Datasets.
IF 8.1
Radiology-Artificial Intelligence Pub Date: 2025-01-15 DOI: 10.1148/ryai.230579
Alexander S Giuffrida, Sulaiman Sheriff, Vicki Huang, Brent D Weinberg, Lee A D Cooper, Yuan Liu, Brian J Soher, Michael Treadway, Andrew A Maudsley, Hyunsuk Shim
Purpose: To develop and evaluate the performance of NNFit, a self-supervised deep learning method for quantification of high-resolution short echo time (TE) echo-planar spectroscopic imaging (EPSI) datasets, with the goal of addressing the computational bottleneck of conventional spectral quantification methods in the clinical workflow.
Materials and Methods: This retrospective study included 89 short-TE whole-brain EPSI/GRAPPA scans from clinical trials for glioblastoma (Trial 1, May 2014-October 2018) and major depressive disorder (Trial 2, 2022-2023). The training dataset included 685k spectra from 20 participants (60 scans) in Trial 1. The testing dataset included 115k spectra from 5 participants (13 scans) in Trial 1 and 145k spectra from 7 participants (16 scans) in Trial 2. A comparative analysis was performed between NNFit and a widely used parametric-modeling spectral quantitation method (FITT). Metabolite maps generated by each method were compared using the structural similarity index measure (SSIM) and linear correlation coefficient (R²). Radiation treatment volumes for glioblastoma based on the metabolite maps were compared using the Dice coefficient and a two-tailed t test.
Results: Average SSIM/R² scores for Trial 1 test set data were 0.91/0.90 (choline), 0.93/0.93 (creatine), 0.93/0.93 (N-acetylaspartate), 0.80/0.72 (myo-inositol), and 0.59/0.47 (glutamate + glutamine). Average scores for Trial 2 test set data were 0.95/0.95, 0.98/0.97, 0.98/0.98, 0.92/0.92, and 0.79/0.81, respectively. The treatment volumes had an average Dice coefficient of 0.92. NNFit's average processing time was 90.1 seconds, whereas FITT took 52.9 minutes on average.
Conclusion: This study demonstrates that a deep learning approach to spectral quantitation offers performance comparable to conventional quantification methods for EPSI data, with faster processing at short TE. ©RSNA, 2025.
Citations: 0
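As a rough illustration of the map-comparison metrics reported above (SSIM and R²), the sketch below compares two synthetic 3D metabolite maps with scikit-image and SciPy; the array names, shapes, and values are assumptions for illustration, not the NNFit or FITT code.

```python
# Hypothetical illustration of the SSIM / R^2 map comparison described above;
# arrays are synthetic stand-ins, not the authors' data or code.
import numpy as np
from skimage.metrics import structural_similarity
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
map_fitt = rng.random((64, 64, 32))  # stand-in for a FITT metabolite map
map_nnfit = map_fitt + 0.05 * rng.standard_normal((64, 64, 32))  # stand-in for NNFit output

ssim = structural_similarity(
    map_fitt, map_nnfit,
    data_range=map_nnfit.max() - map_nnfit.min())
r, _ = pearsonr(map_fitt.ravel(), map_nnfit.ravel())
print(f"SSIM = {ssim:.2f}, R^2 = {r**2:.2f}")
```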
Posttraining Network Compression for 3D Medical Image Segmentation: Reducing Computational Efforts via Tucker Decomposition.
IF 8.1
Radiology-Artificial Intelligence Pub Date: 2025-01-15 DOI: 10.1148/ryai.240353
Tobias Weber, Jakob Dexl, David Rügamer, Michael Ingrisch
Purpose: To investigate whether the computational effort of 3D CT-based multiorgan segmentation with TotalSegmentator can be reduced via Tucker decomposition-based network compression.
Materials and Methods: In this retrospective study, Tucker decomposition was applied to the convolutional kernels of the TotalSegmentator model, an nnU-Net model trained on a comprehensive CT dataset for automatic segmentation of 117 anatomic structures. The proposed approach reduces the floating-point operations (FLOPs) and memory required during inference, offering an adjustable trade-off between computational efficiency and segmentation quality. The study used the publicly available TotalSegmentator dataset containing 1228 segmented CT scans and a test subset of 89 CT scans, employing various downsampling factors to explore the relationship between model size, inference speed, and segmentation accuracy, evaluated using the Dice score.
Results: Applying Tucker decomposition to the TotalSegmentator model substantially reduced model parameters and FLOPs across various compression ratios, with limited loss in segmentation accuracy. Up to 88% of the model's parameters were removed, with no evidence of differences in performance compared with the original model for 113 of 117 classes after fine-tuning. Practical benefits varied across graphics processing unit architectures, with more pronounced speed-ups on less powerful hardware.
Conclusion: Posthoc network compression via Tucker decomposition is a viable strategy for reducing the computational demand of medical image segmentation models without substantially impacting model accuracy. ©RSNA, 2025.
Citations: 0
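The core idea above — factorizing convolutional kernels with a Tucker decomposition to cut parameters and FLOPs — can be sketched with TensorLy. The layer shape and target ranks below are illustrative assumptions, not the values used for TotalSegmentator.

```python
# Minimal sketch of Tucker-decomposing a 3D conv kernel, in the spirit of the
# approach described above; shapes and ranks are illustrative assumptions.
import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker

tl.set_backend("numpy")

# A hypothetical nnU-Net Conv3d weight: (out_ch, in_ch, kD, kH, kW).
kernel = np.random.randn(64, 32, 3, 3, 3).astype(np.float32)

# Compress only the channel modes; keep the small spatial modes at full rank.
core, factors = tucker(tl.tensor(kernel), rank=[16, 8, 3, 3, 3])

orig_params = kernel.size
comp_params = core.size + sum(f.size for f in factors)
print(f"parameters: {orig_params} -> {comp_params} "
      f"({100 * (1 - comp_params / orig_params):.0f}% removed)")
```

At inference time, the factorized kernel is typically realized as a sequence of smaller convolutions (channel projection, core convolution, channel expansion), which is where the FLOPs savings come from.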
A Serial MRI-based Deep Learning Model to Predict Survival in Patients with Locoregionally Advanced Nasopharyngeal Carcinoma.
IF 8.1
Radiology-Artificial Intelligence Pub Date: 2025-01-15 DOI: 10.1148/ryai.230544
Jia Kou, Jun-Yi Peng, Wen-Bing Lv, Chen-Fei Wu, Zi-Hang Chen, Guan-Qun Zhou, Ya-Qin Wang, Li Lin, Li-Jun Lu, Ying Sun
Purpose: To develop and evaluate a deep learning-based prognostic model for predicting survival in locoregionally advanced nasopharyngeal carcinoma (LA-NPC) using serial MRI before and after induction chemotherapy (IC).
Materials and Methods: This multicenter retrospective study included 1039 patients with LA-NPC (779 male, 260 female; mean age, 44 years ± 11 [SD]) diagnosed between April 2009 and December 2015. A radiomics-clinical prognostic model (Model RC) was developed using pre- and post-IC MRI and other clinical factors, using graph convolutional neural networks (GCNs). The concordance index (C-index) was used to evaluate model performance in predicting disease-free survival (DFS). The survival benefits of concurrent chemoradiation therapy (CCRT) were analyzed in model-defined risk groups.
Results: The C-indexes of Model RC for predicting DFS were significantly higher than those of TNM staging in the internal (0.79 versus 0.53) and external (0.79 versus 0.62; both P < .001) testing cohorts. The 5-year DFS for the Model RC-defined low-risk group was significantly better than that of the high-risk group (90.6% versus 58.9%, P < .001). Among high-risk patients, those who received CCRT had a higher 5-year DFS rate than those who did not (58.7% versus 28.6%, P = .03). There was no evidence of a difference in 5-year DFS rate between low-risk patients who did or did not receive CCRT (91.9% versus 81.3%, P = .19).
Conclusion: Serial MRI before and after IC can effectively predict survival in LA-NPC. The radiomics-clinical prognostic model developed using a GCN-based deep learning method showed good risk discrimination and may facilitate risk-adapted therapy. ©RSNA, 2025.
Citations: 0
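For readers unfamiliar with the C-index used above: it is the fraction of comparable patient pairs in which the model ranks predicted risk consistently with observed outcomes (0.5 is chance, 1.0 is perfect). A minimal sketch with the lifelines package, on made-up follow-up data rather than study data:

```python
# Illustrative C-index computation for a survival model like the one above;
# follow-up times, events, and risk scores are hypothetical.
import numpy as np
from lifelines.utils import concordance_index

follow_up_months = np.array([12.0, 30.5, 24.0, 60.0, 8.5])
event_observed = np.array([1, 0, 1, 0, 1])   # 1 = progression/relapse observed
risk_score = np.array([0.9, 0.2, 0.7, 0.1, 0.8])

# lifelines expects higher predictions for longer survival,
# so a risk score must be negated before scoring.
c_index = concordance_index(follow_up_months, -risk_score, event_observed)
print(f"C-index = {c_index:.2f}")
```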
Accuracy of Fully Automated and Human-assisted AI-based CT Quantification of Pleural Effusion Changes after Thoracentesis.
IF 8.1
Radiology-Artificial Intelligence Pub Date: 2025-01-15 DOI: 10.1148/ryai.240215
Eui Jin Hwang, Hyunsook Hong, Seungyeon Ko, Seung-Jin Yoo, Hyungjin Kim, Dahee Kim, Soon Ho Yoon
Quantifying pleural effusion change on chest CT is important for evaluating disease severity and treatment response. The purpose of this study was to assess the accuracy of artificial intelligence (AI)-based volume quantification of pleural effusion change on CT images, using the volume of drained fluid as the reference standard. Seventy-nine participants (mean age, 65 years ± 13 [SD]; 47 male) undergoing thoracentesis were prospectively enrolled from October 2021 to September 2023. Chest CT scans were obtained just before and after thoracentesis. The volume of pleural fluid on each CT scan, with the difference representing the drained fluid volume, was measured by automated segmentation (fully automated measurement). An expert thoracic radiologist then manually corrected these automated volume measurements (human-assisted measurement). Both fully automated (median percentage error, 13.1%; maximum estimated 95% error range, 708 mL) and human-assisted measurements (median percentage error, 10.9%; maximum estimated 95% error range, 312 mL) systematically underestimated the volume of drained fluid, beyond the equivalence margin. The magnitude of underestimation increased proportionally with the drainage volume. Agreement between fully automated and human-assisted measurements (intraclass correlation coefficient [ICC], 0.99) and the test-retest reliability of fully automated (ICC, 0.995) and human-assisted (ICC, 0.997) measurements were excellent. These results highlight a potential systematic discrepancy between AI segmentation-based CT quantification of pleural effusion volume change and actual volume change. ©RSNA, 2025.
Citations: 0
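A toy version of the error analysis described above — AI-measured volume change on pre/post-thoracentesis CT against the drained volume as reference — might look like the following; all volumes are hypothetical, not study data.

```python
# Toy illustration of the percentage-error metric described above.
import numpy as np

pre_ct_ml = np.array([900.0, 1500.0, 600.0])   # AI effusion volume before drainage
post_ct_ml = np.array([300.0, 700.0, 250.0])   # AI effusion volume after drainage
drained_ml = np.array([700.0, 950.0, 400.0])   # reference: fluid actually drained

ai_change = pre_ct_ml - post_ct_ml             # AI-derived volume change
pct_error = 100.0 * np.abs(ai_change - drained_ml) / drained_ml
print(f"median percentage error = {np.median(pct_error):.1f}%")
```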
Evaluating the Impact of Changes in AI-derived Case Scores over Time on Digital Breast Tomosynthesis Screening Outcomes.
IF 8.1
Radiology-Artificial Intelligence Pub Date: 2025-01-15 DOI: 10.1148/ryai.230597
Samantha P Zuckerman, Senthil Periaswamy, Julie L Shisler, Ameena Elahi, Christine E Edmonds, Jeffrey Hoffmeister, Emily F Conant
Purpose: To evaluate the change in DBT-AI (digital breast tomosynthesis-artificial intelligence) case scores over sequential screening examinations.
Materials and Methods: This retrospective review included 21,108 female patients (mean age, 58.1 years ± 11.5 [SD]) with 31,741 DBT screening examinations performed at a single site from February 3, 2020, to September 12, 2022. Among 7000 patients with two or more DBT-AI screenings, 1799 had one-year follow-up and were included in the analysis. DBT-AI case scores (range, 0-100) and changes in case score over time were determined. For each screening outcome (true positive [TP], false positive [FP], true negative [TN], false negative [FN]), the mean and median case score changes were calculated.
Results: The highest average case score was seen in TP examinations (average, 75; range, 7-100; n = 41), and the lowest average case score was seen in TN examinations (average, 34; range, 0-100; n = 1640). The largest positive case score change was seen in TP examinations (mean change, 21.1; median change, 17). FN examinations included mammographically occult cancers diagnosed after supplemental screening and those found on symptomatic diagnostic imaging. Differences between TP and TN mean case score changes (P < .001) and between TP and FP mean case score changes (P = .02) were statistically significant.
Conclusion: Combining the DBT-AI case score with change in case score over time may help radiologists make recall decisions in DBT screening. All studies with a high case score and/or a large case score change should be carefully scrutinized to maximize screening performance. ©RSNA, 2025.
Citations: 0
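A minimal sketch of deriving per-patient case score changes across sequential screens, in the spirit of the analysis above; the table layout and column names are assumptions for illustration.

```python
# Hypothetical derivation of case-score change between sequential screens.
import pandas as pd

exams = pd.DataFrame({
    "patient_id": [1, 1, 2, 2, 3, 3],
    "exam_date": pd.to_datetime(
        ["2020-03-01", "2021-03-05", "2020-06-10",
         "2021-06-12", "2020-09-01", "2021-09-03"]),
    "case_score": [30, 85, 40, 38, 10, 22],   # DBT-AI case score, 0-100
})

exams = exams.sort_values(["patient_id", "exam_date"])
exams["score_change"] = exams.groupby("patient_id")["case_score"].diff()
print(exams.dropna(subset=["score_change"]))
```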
FastMRI Breast: A Publicly Available Radial K-space Dataset of Breast Dynamic Contrast-enhanced MRI.
IF 8.1
Radiology-Artificial Intelligence Pub Date: 2025-01-08 DOI: 10.1148/ryai.240345
Eddy Solomon, Patricia M Johnson, Zhengguo Tan, Radhika Tibrewala, Yvonne W Lui, Florian Knoll, Linda Moy, Sungheon Gene Kim, Laura Heacock
The fastMRI breast dataset is the first large-scale dataset of radial k-space and DICOM data for breast dynamic contrast-enhanced MRI with case-level labels. Its public availability aims to advance fast and quantitative machine learning research. ©RSNA, 2025.
Citations: 0
RSNA 2023 Abdominal Trauma AI Challenge: Review and Outcomes.
IF 8.1
Radiology-Artificial Intelligence Pub Date: 2025-01-01 DOI: 10.1148/ryai.240334
Sebastiaan Hermans, Zixuan Hu, Robyn L Ball, Hui Ming Lin, Luciano M Prevedello, Ferco H Berger, Ibrahim Yusuf, Jeffrey D Rudie, Maryam Vazirabad, Adam E Flanders, George Shih, John Mongan, Savvas Nicolaou, Brett S Marinelli, Melissa A Davis, Kirti Magudia, Ervin Sejdić, Errol Colak
Purpose: To evaluate the performance of the winning machine learning models from the 2023 RSNA Abdominal Trauma Detection AI Challenge.
Materials and Methods: The competition was hosted on Kaggle and took place between July 26 and October 15, 2023. The multicenter competition dataset consisted of 4274 abdominal trauma CT scans, in which the solid organs (liver, spleen, and kidneys) were annotated as healthy, low-grade injury, or high-grade injury. Studies were labeled as positive or negative for the presence of bowel and mesenteric injury and active extravasation. In this study, the performances of the eight award-winning models were retrospectively assessed and compared using various metrics, including the area under the receiver operating characteristic curve (AUC), for each injury category. The reported mean values of these metrics were calculated by averaging performance across all models for each specified injury type.
Results: The models exhibited strong performance in detecting solid organ injuries, particularly high-grade injuries. For binary detection of injuries, the models demonstrated mean AUC values of 0.92 (range, 0.90-0.94) for liver, 0.91 (range, 0.87-0.93) for splenic, and 0.94 (range, 0.93-0.95) for kidney injuries. The models achieved mean AUC values of 0.98 (range, 0.96-0.98) for high-grade liver, 0.98 (range, 0.97-0.99) for high-grade splenic, and 0.98 (range, 0.97-0.98) for high-grade kidney injuries. For detection of bowel and mesenteric injuries and active extravasation, the models demonstrated mean AUC values of 0.85 (range, 0.74-0.93) and 0.85 (range, 0.79-0.89), respectively.
Conclusion: The award-winning models from the AI challenge demonstrated strong performance in detecting traumatic abdominal injuries on CT scans, particularly high-grade injuries. These models may serve as a performance baseline for future investigations and algorithms.
Keywords: Abdominal Trauma, CT, American Association for the Surgery of Trauma, Machine Learning, Artificial Intelligence. Supplemental material is available for this article. © RSNA, 2024.
Citations: 0
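The per-category AUC evaluation described above can be illustrated with scikit-learn; the labels and model outputs below are synthetic stand-ins, not challenge data.

```python
# Illustrative per-category AUC computation mirroring the challenge evaluation;
# all labels and scores are synthetic.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
y_true = {"liver_injury": rng.integers(0, 2, 200),
          "bowel_injury": rng.integers(0, 2, 200)}
# Noisy model outputs correlated with the labels.
y_score = {k: np.clip(v + 0.3 * rng.standard_normal(200), 0, 1)
           for k, v in y_true.items()}

for category in y_true:
    auc = roc_auc_score(y_true[category], y_score[category])
    print(f"{category}: AUC = {auc:.2f}")
```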
Using Artificial Intelligence to Improve Diagnosis of Unruptured Intracranial Aneurysms.
IF 8.1
Radiology-Artificial Intelligence Pub Date: 2025-01-01 DOI: 10.1148/ryai.240696
Shuncong Wang
Citations: 0
Integrated Deep Learning Model for the Detection, Segmentation, and Morphologic Analysis of Intracranial Aneurysms Using CT Angiography.
IF 8.1
Radiology-Artificial Intelligence Pub Date: 2025-01-01 DOI: 10.1148/ryai.240017
Yi Yang, Zhenyao Chang, Xin Nie, Jun Wu, Jingang Chen, Weiqi Liu, Hongwei He, Shuo Wang, Chengcheng Zhu, Qingyuan Liu
Purpose: To develop a deep learning model for morphologic measurement of unruptured intracranial aneurysms (UIAs) based on CT angiography (CTA) data and to validate its performance using a multicenter dataset.
Materials and Methods: In this retrospective study, patients who underwent CTA, with and without UIAs, at a tertiary referral hospital from February 2018 to February 2021 were included as the training dataset. Patients with UIAs who underwent CTA at multiple centers between April 2021 and December 2022 were included as the multicenter external testing set. An integrated deep learning (IDL) model was developed for UIA detection, segmentation, and morphologic measurement using an nnU-Net algorithm. Model performance was evaluated using the Dice similarity coefficient (DSC) and intraclass correlation coefficient (ICC), with measurements by senior radiologists serving as the reference standard. The ability of the IDL model to improve the performance of junior radiologists in measuring morphologic UIA features was also assessed.
Results: The study included 1182 patients with UIAs and 578 controls without UIAs as the training dataset (median age, 55 years [IQR, 47-62 years]; 1012 [57.5%] female) and 535 patients with UIAs as the multicenter external testing set (median age, 57 years [IQR, 50-63 years]; 353 [66.0%] female). The IDL model achieved 97% accuracy in detecting UIAs and a DSC of 0.90 (95% CI: 0.88, 0.92) for UIA segmentation. Model-based morphologic measurements showed good agreement with reference standard measurements (all ICCs > 0.85). In the multicenter external testing set, the IDL model also agreed with reference standard measurements (all ICCs > 0.80). Junior radiologists assisted by the IDL model showed significantly improved performance in measuring UIA size (ICC improved from 0.88 [95% CI: 0.80, 0.92] to 0.96 [95% CI: 0.92, 0.97]; P < .001).
Conclusion: The integrated deep learning model using CTA data showed good performance in UIA detection, segmentation, and morphologic measurement and may be used to assist less experienced radiologists in morphologic analysis of UIAs.
Keywords: Segmentation, CT Angiography, Head/Neck, Aneurysms, Comparative Studies. Supplemental material is available for this article. © RSNA, 2024. See also the commentary by Wang in this issue.
Citations: 0
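The Dice similarity coefficient reported above has a compact definition, DSC = 2|A∩B| / (|A| + |B|); a minimal sketch on tiny synthetic masks:

```python
# Minimal Dice similarity coefficient (DSC) for binary segmentation masks;
# the masks below are toy examples, not study data.
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """DSC = 2|A∩B| / (|A| + |B|); defined as 1.0 when both masks are empty."""
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

truth = np.zeros((8, 8), dtype=bool); truth[2:6, 2:6] = True
pred = np.zeros((8, 8), dtype=bool); pred[3:6, 2:6] = True
print(f"DSC = {dice(pred, truth):.2f}")   # 0.86 for this toy pair
```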
A Machine Learning Model to Harmonize Volumetric Brain MRI Data for Quantitative Neuroradiological Assessment of Alzheimer Disease.
IF 8.1
Radiology-Artificial Intelligence Pub Date: 2024-12-18 DOI: 10.1148/ryai.240030
Damiano Archetti, Vikram Venkatraghavan, Béla Weiss, Pierrick Bourgeat, Tibor Auer, Zoltán Vidnyánszky, Stanley Durrleman, Wiesje M van der Flier, Frederik Barkhof, Daniel C Alexander, Andre Altmann, Alberto Redolfi, Betty M Tijms, Neil P Oxtoby
Purpose: To extend a previously developed machine learning algorithm for harmonizing brain volumetric data to individuals undergoing neuroradiological assessment of Alzheimer disease who were not encountered during model training.
Materials and Methods: Neuroharmony is a recently developed method that uses image quality metrics (IQMs) as predictors to remove scanner-related effects in brain volumetric data using random forest regression. To account for interactions between Alzheimer disease pathology and IQMs during harmonization, the authors developed a multiclass extension of Neuroharmony for individuals with and without cognitive impairment. Cross-validation experiments benchmarked performance against other available strategies using data from 20,864 participants with and without cognitive impairment, spanning 11 prospective and retrospective cohorts and 43 scanners. Evaluation metrics assessed the ability to remove scanner-related variation in brain volumes (marker concordance between scanner pairs) while retaining the ability to delineate diagnostic groups (preserving disease-related signal).
Results: For each strategy, marker concordance between scanners was significantly better than for preharmonized data (P < .001). The proposed multiclass model achieved significantly higher concordance (0.75 ± 0.09) than the Neuroharmony model trained on individuals without cognitive impairment (0.70 ± 0.11) and preserved disease-related signal (ΔAUC = -0.006 ± 0.027) better than the Neuroharmony model trained on individuals with and without cognitive impairment that did not use the proposed extension (ΔAUC = -0.091 ± 0.036). Marker concordance was better for scanners seen during training (concordance > 0.97) than for unseen scanners (concordance < 0.79), independent of cognitive status.
Conclusion: In a large-scale multicenter dataset, the proposed multiclass Neuroharmony model outperformed other available strategies for harmonizing brain volumetric data from unseen scanners in a clinical setting. Published under a CC BY 4.0 license.
Citations: 0
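Conceptually, Neuroharmony-style harmonization fits a regressor from IQMs to scanner-related deviations in measured volumes and subtracts the prediction. The sketch below mimics that idea with scikit-learn on simulated data; it is not the Neuroharmony package API, and all variable names and numbers are illustrative assumptions.

```python
# Conceptual sketch of IQM-based harmonization with random forest regression,
# in the spirit of Neuroharmony as summarized above. NOT the package's API;
# data and names are simulated for illustration.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(7)
n = 500
iqms = rng.standard_normal((n, 5))              # per-scan image quality metrics
true_volume = rng.normal(1200, 50, n)           # "scanner-free" volume, mm^3
scanner_offset = iqms @ np.array([80.0, -50.0, 30.0, 0.0, 20.0])
observed_volume = true_volume + scanner_offset  # what the pipeline measures

# Fit IQMs -> deviation from a reference (here, the cohort mean),
# then subtract the predicted scanner-related component.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(iqms, observed_volume - observed_volume.mean())
harmonized = observed_volume - model.predict(iqms)
print(f"SD before: {observed_volume.std():.1f} mm^3, "
      f"after: {harmonized.std():.1f} mm^3")
```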