Radiology-Artificial Intelligence: Latest Articles

Unsupervised Deep Learning for Blood-Brain Barrier Leakage Detection in Diffuse Glioma Using Dynamic Contrast-enhanced MRI
IF 8.1
Radiology-Artificial Intelligence Pub Date: 2025-05-01 DOI: 10.1148/ryai.240507
Joon Jang, Kyu Sung Choi, Junhyeok Lee, Hyochul Lee, Inpyeong Hwang, Jung Hyun Park, Jin Wook Chung, Seung Hong Choi, Hyeonjin Kim
Purpose: To develop an unsupervised deep learning framework for generalizable blood-brain barrier leakage detection using dynamic contrast-enhanced MRI, without requiring pharmacokinetic models and arterial input function estimation.
Materials and Methods: This retrospective study included data from patients who underwent dynamic contrast-enhanced MRI between April 2010 and December 2020. An autoencoder-based anomaly detection approach identified one-dimensional voxel-wise time-series abnormal signals through reconstruction residuals, separating them into residual leakage signals (RLSs) and residual vascular signals. The RLS maps were evaluated and compared with the volume transfer constant (Ktrans) using the structural similarity index and correlation coefficient. Generalizability was tested on subsampled data, and isocitrate dehydrogenase (IDH) status classification performance was assessed using area under the receiver operating characteristic curve (AUC).
Results: A total of 274 patients (mean age, 54.4 years ± 14.6 [SD]; 164 male) were included in the study. RLS showed high structural similarity (structural similarity index, 0.91 ± 0.02) and correlation (r = 0.56; P < .001) with Ktrans. On subsampled data, RLS maps showed better correlation with RLS values from the original data (0.89 vs 0.72; P < .001), higher peak signal-to-noise ratio (33.09 dB vs 28.94 dB; P < .001), and higher structural similarity index (0.92 vs 0.87; P < .001) compared with Ktrans maps. RLS maps also outperformed Ktrans maps in predicting IDH mutation status (AUC, 0.87 [95% CI: 0.83, 0.91] vs 0.81 [95% CI: 0.76, 0.85]; P = .02).
Conclusion: The unsupervised framework effectively detected blood-brain barrier leakage without pharmacokinetic models and arterial input function.
Keywords: Dynamic Contrast-enhanced MRI, Unsupervised Learning, Feature Detection, Blood-Brain Barrier Leakage Detection
Supplemental material is available for this article. © RSNA, 2025. See also commentary by Júdice de Mattos Farina and Kuriki in this issue.
Artificial Intelligence Is Brittle: We Need to Do Better
IF 13.2
Radiology-Artificial Intelligence Pub Date: 2025-05-01 DOI: 10.1148/ryai.250081
Abhinav Suri, William Hsu
Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12127952/pdf/
Performance of Two Deep Learning-based AI Models for Breast Cancer Detection and Localization on Screening Mammograms from BreastScreen Norway
IF 8.1
Radiology-Artificial Intelligence Pub Date: 2025-05-01 DOI: 10.1148/ryai.240039
Marit A Martiniussen, Marthe Larsen, Tone Hovda, Merete U Kristiansen, Fredrik A Dahl, Line Eikvil, Olav Brautaset, Atle Bjørnerud, Vessela Kristensen, Marie B Bergan, Solveig Hofvind
Purpose: To evaluate cancer detection and marker placement accuracy of two artificial intelligence (AI) models developed for interpretation of screening mammograms.
Materials and Methods: This retrospective study included data from 129 434 screening examinations (all female patients; mean age, 59.2 years ± 5.8 [SD]) performed between January 2008 and December 2018 in BreastScreen Norway. Model A was commercially available, and model B was an in-house model. Area under the receiver operating characteristic curve (AUC) values with 95% CIs were calculated. The study defined the 3.2% and 11.1% of examinations with the highest AI scores as positive, at threshold 1 and threshold 2, respectively. A radiologic review assessed the location of AI markings and classified interval cancers as true or false negative.
Results: The AUC value was 0.93 (95% CI: 0.92, 0.94) for models A and B when including screen-detected and interval cancers. Model A identified 82.5% (611 of 741) of the screen-detected cancers at threshold 1 and 92.4% (685 of 741) at threshold 2. Model B identified 81.8% (606 of 741) at threshold 1 and 93.7% (694 of 741) at threshold 2. The AI markings were correctly localized for all screen-detected cancers identified by both models and for 82% (56 of 68) of the interval cancers for model A and 79% (54 of 68) for model B. At the review, 21.6% (45 of 208) of the interval cancers were identified at the preceding screening by either or both models, correctly localized, and classified as false negative (n = 17) or as showing minimal signs of malignancy (n = 28).
Conclusion: Both AI models showed promising performance for cancer detection on screening mammograms. The AI markings corresponded well to the true cancer locations.
Keywords: Breast, Mammography, Screening, Computer-aided Diagnosis
Supplemental material is available for this article. © RSNA, 2025. See also commentary by Cadrin-Chênevert in this issue.
One System to Rule Them All? Task- and Data-specific Considerations for Automated Data Extraction
IF 13.2
Radiology-Artificial Intelligence Pub Date: 2025-05-01 DOI: 10.1148/ryai.250175
Ali S Tejani, Andreas M Rauschecker
Seeing the Unseen: How Unsupervised Learning Can Predict Genetic Mutations from Radiologic Images
IF 8.1
Radiology-Artificial Intelligence Pub Date: 2025-05-01 DOI: 10.1148/ryai.250243
Eduardo Moreno Júdice de Mattos Farina, Paulo Eduardo de Aguiar Kuriki
Adaptive Dual-Task Deep Learning for Automated Thyroid Cancer Triaging at Screening US
IF 8.1
Radiology-Artificial Intelligence Pub Date: 2025-05-01 DOI: 10.1148/ryai.240271
Shao-Hong Wu, Ming-De Li, Wen-Juan Tong, Yi-Hao Liu, Rui Cui, Jin-Bo Hu, Mei-Qing Cheng, Wei-Ping Ke, Xin-Xin Lin, Jia-Yi Lv, Long-Zhong Liu, Jie Ren, Guang-Jian Liu, Hong Yang, Wei Wang
Purpose: To develop an adaptive dual-task deep learning model (ThyNet-S) for detection and classification of thyroid lesions at US screening.
Materials and Methods: This retrospective study used a multicenter dataset comprising 35 008 thyroid US images of 23 294 individual examinations (mean age, 40.4 years ± 13.1 [SD]; 17 587 female) from seven medical centers from January 2009 to December 2021. Of these, 29 004 images were used for model development and 6004 images for validation. The model determined cancer risk for each image and automatically triaged images with normal thyroid and benign nodules by dynamically integrating lesion detection through pixel-level feature analysis and lesion classification through deep semantic feature analysis. Diagnostic performance of screening assisted by the model (ThyNet-S triaged screening) and traditional screening (radiologists alone) was assessed by comparing sensitivity, specificity, accuracy, and area under the receiver operating characteristic curve using the McNemar test and DeLong test. The influence of ThyNet-S on radiologist workload and clinical decision-making was also assessed.
Results: ThyNet-S-assisted triaged screening achieved a higher area under the receiver operating characteristic curve than original screening with six senior and six junior radiologists (0.93 vs 0.91 and 0.92 vs 0.88, respectively; all P < .001). The model improved sensitivity for junior radiologists (88.2% vs 86.8%; P < .001). Notably, the model reduced radiologists' workload by triaging 60.4% of cases as not potentially malignant, which did not require further interpretation. The model simultaneously decreased the unnecessary fine needle aspiration rate from 38.7% to 14.9% and 11.5% when used independently or in combination with the Thyroid Imaging Reporting and Data System, respectively.
Conclusion: ThyNet-S improved the efficiency of thyroid cancer screening and optimized clinical decision-making.
Keywords: Artificial Intelligence, Adaptive, Dual Task, Thyroid Cancer, Screening, Ultrasound
Supplemental material is available for this article. © RSNA, 2025.
Pixels to Prognosis: Using Deep Learning to Rethink Cardiac Risk Prediction from CT Angiography
IF 8.1
Radiology-Artificial Intelligence Pub Date: 2025-05-01 DOI: 10.1148/ryai.250260
Rohit Reddy
Automatic Quantification of Serial PET/CT Images for Pediatric Hodgkin Lymphoma Using a Longitudinally Aware Segmentation Network
IF 8.1
Radiology-Artificial Intelligence Pub Date: 2025-05-01 DOI: 10.1148/ryai.240229
Xin Tie, Muheon Shin, Changhee Lee, Scott B Perlman, Zachary Huemann, Amy J Weisman, Sharon M Castellino, Kara M Kelly, Kathleen M McCarten, Adina L Alazraki, Junjie Hu, Steve Y Cho, Tyler J Bradshaw
Purpose: To develop a longitudinally aware segmentation network (LAS-Net) that can quantify serial PET/CT images for pediatric patients with Hodgkin lymphoma.
Materials and Methods: This retrospective study included baseline (PET1) and interim (PET2) PET/CT images from 297 pediatric patients enrolled in two Children's Oncology Group clinical trials (AHOD1331 and AHOD0831). The internal dataset included 200 patients (enrolled between March 2015 and August 2019; median age, 15.4 years [range, 5.6-22.0 years]; 107 male), and the external testing dataset included 97 patients (enrolled between December 2009 and January 2012; median age, 15.8 years [range, 5.2-21.4 years]; 59 male). LAS-Net incorporates longitudinal cross-attention, allowing relevant features from PET1 to inform the analysis of PET2. The model's lesion segmentation performance on PET1 images was evaluated using Dice coefficients, and lesion detection performance on PET2 images was evaluated using F1 scores. In addition, quantitative PET metrics, including metabolic tumor volume (MTV) and total lesion glycolysis (TLG) in PET1, as well as qPET and percentage difference between baseline and interim maximum standardized uptake value (ΔSUVmax) in PET2, were extracted and compared against physician-derived measurements. Agreement between model and physician-derived measurements was quantified using Spearman correlation, and bootstrap resampling was used for statistical analysis.
Results: LAS-Net detected residual lymphoma on PET2 scans with an F1 score of 0.61 (precision/recall: 0.62/0.60), outperforming all comparator methods (P < .01). For baseline segmentation, LAS-Net achieved a mean Dice score of 0.77. In PET quantification, LAS-Net's measurements of qPET, ΔSUVmax, MTV, and TLG were strongly correlated with physician measurements, with Spearman ρ values of 0.78, 0.80, 0.93, and 0.96, respectively. The quantification performance remained high, with a slight decrease, in an external testing cohort.
Conclusion: LAS-Net demonstrated significant improvements in quantifying PET metrics across serial scans in pediatric patients with Hodgkin lymphoma, highlighting the value of longitudinal awareness in evaluating multi-time-point imaging datasets.
Keywords: Pediatrics, PET/CT, Lymphoma, Segmentation, Quantification, Supervised Learning, Convolutional Neural Network (CNN), Quantitative PET, Longitudinal Analysis, Deep Learning, Image Segmentation
Supplemental material is available for this article. Clinical trial registration nos. NCT02166463 and NCT01026220. © RSNA, 2025. See also commentary by Khosravi and Gichoya in this issue.
Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12127956/pdf/
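The longitudinal cross-attention idea, in which features from the interim scan attend to features from the baseline scan, can be sketched as follows. The shapes, module names, and residual/normalization choices are assumptions for illustration, not the LAS-Net implementation.

```python
# Minimal sketch of longitudinal cross-attention: PET2 (interim) feature tokens
# query PET1 (baseline) feature tokens. Shapes and names are illustrative.
import torch
import torch.nn as nn

class LongitudinalCrossAttention(nn.Module):
    def __init__(self, dim: int = 128, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, pet2_tokens: torch.Tensor, pet1_tokens: torch.Tensor) -> torch.Tensor:
        # pet2_tokens: (B, N2, dim) query tokens from the interim scan
        # pet1_tokens: (B, N1, dim) key/value tokens from the baseline scan
        attended, _ = self.attn(pet2_tokens, pet1_tokens, pet1_tokens)
        return self.norm(pet2_tokens + attended)   # residual connection

B, N1, N2, dim = 2, 96, 96, 128
block = LongitudinalCrossAttention(dim)
out = block(torch.randn(B, N2, dim), torch.randn(B, N1, dim))
print(out.shape)  # torch.Size([2, 96, 128])
```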
A Pipeline for Automated Quality Control of Chest Radiographs
IF 13.2
Radiology-Artificial Intelligence Pub Date: 2025-05-01 DOI: 10.1148/ryai.240003
Ian A Selby, Eduardo González Solares, Anna Breger, Michael Roberts, Lorena Escudero Sánchez, Judith Babar, James H F Rudd, Nicholas A Walton, Evis Sala, Carola-Bibiane Schönlieb, Jonathan R Weir-McCall
Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12127945/pdf/
Enhancing Large Language Models with Retrieval-Augmented Generation: A Radiology-Specific Approach
IF 8.1
Radiology-Artificial Intelligence Pub Date: 2025-05-01 DOI: 10.1148/ryai.240313
Dane A Weinert, Andreas M Rauschecker
Retrieval-augmented generation (RAG) is a strategy to improve the performance of large language models (LLMs) by providing an LLM with an updated corpus of knowledge that can be used for answer generation in real time. RAG may improve LLM performance and clinical applicability in radiology by providing citable, up-to-date information without requiring model fine-tuning. In this retrospective study, a radiology-specific RAG system was developed using a vector database of 3689 RadioGraphics articles published from January 1999 to December 2023. The performance of five LLMs with RAG (RAG systems) and without RAG on a 192-question radiology examination was compared. RAG significantly improved examination scores for GPT-4 (OpenAI; 81.2% vs 75.5%, P = .04) and Command R+ (Cohere; 70.3% vs 62.0%, P = .02), but not for Claude Opus (Anthropic), Mixtral (Mistral AI), or Gemini 1.5 Pro (Google DeepMind). The RAG systems performed significantly better than pure LLMs on a 24-question subset directly sourced from RadioGraphics (85% vs 76%, P = .03). The RAG systems retrieved 21 of 24 (87.5%, P < .001) relevant RadioGraphics references cited in the examination's answer explanations and successfully cited them in 18 of 21 (85.7%, P < .001) outputs. The results suggest that RAG is a promising approach to enhance LLM capabilities for radiology knowledge tasks, providing transparent, domain-specific information retrieval.
Keywords: Computer Applications-General (Informatics), Technology Assessment
Supplemental material is available for this article. © RSNA, 2025. See also commentary by Mansuri and Gichoya in this issue.