Journal of the American College of Radiology : JACR - Latest Articles

Impact of Artificial Intelligence Triage on Radiologist Report Turnaround Time: Real-World Time Savings and Insights From Model Predictions.
Journal of the American College of Radiology : JACR Pub Date : 2025-09-29 DOI: 10.1016/j.jacr.2025.07.033
Yee Lam Elim Thompson, Jonathan Fergus, Jonathan Chung, Jana G Delfino, Weijie Chen, Gary M Levine, Frank W Samuelson
{"title":"Impact of Artificial Intelligence Triage on Radiologist Report Turnaround Time: Real-World Time Savings and Insights From Model Predictions.","authors":"Yee Lam Elim Thompson, Jonathan Fergus, Jonathan Chung, Jana G Delfino, Weijie Chen, Gary M Levine, Frank W Samuelson","doi":"10.1016/j.jacr.2025.07.033","DOIUrl":"https://doi.org/10.1016/j.jacr.2025.07.033","url":null,"abstract":"<p><strong>Objective: </strong>To quantify the impact of workflow parameters on time savings in report turnaround time due to an AI triage device that prioritized pulmonary embolism (PE) in chest CT pulmonary angiography (CTPA) examinations.</p><p><strong>Methods: </strong>This retrospective study analyzed 11,252 adult CTPA examinations conducted for suspected PE at a single tertiary academic medical center. Data was divided into two periods: pre-artificial intelligence (AI) and post-AI. For PE-positive examinations, turnaround time (TAT)-defined as the duration from patient scan completion to the first preliminary report completion-was compared between the two periods. Time savings were reported separately for work-hour and off-hour cohorts. To characterize radiologist workflow, 527,234 records were retrieved from the PACS and workflow parameters such as examination interarrival time and radiologist read time extracted. These parameters were input into a computational model to predict time savings after deployment of an AI triage device and to study the impact of workflow parameters.</p><p><strong>Results: </strong>The pre-AI dataset included 4,694 chest CTPA examinations with 13.3% being PE-positive. The post-AI dataset comprised 6,558 examinations with 16.2% being PE-positive. The mean TAT for pre-AI and post-AI during work hours are 68.9 (95% confidence interval 55.0-82.8) and 46.7 (38.1-55.2) min, respectively, and those during off-hours are 44.8 (33.7-55.9) and 42.0 (33.6-50.3) min. Clinically observed time savings during work hours (22.2 [95% confidence interval: 5.85-38.6] min) were significant (P = .004), while off-hour (2.82 [-11.1 to 16.7] min) were not (P = .345). Observed time savings aligned with model predictions (29.6 [95% range: 23.2-38.1] min for work hours; 2.10 [1.76, 2.58] min for off-hours).</p><p><strong>Discussion: </strong>Consideration and quantification of the clinical workflow contributes to the accurate assessment of the expected time savings in report TAT after deployment of an AI triage device.</p>","PeriodicalId":73968,"journal":{"name":"Journal of the American College of Radiology : JACR","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145202239","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
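The abstract above compares mean report turnaround time (TAT) between pre-AI and post-AI periods, separately for work-hour and off-hour cohorts, with confidence intervals on the savings. A minimal Python sketch of that kind of comparison follows; the DataFrame column names (period, cohort, tat_min) are invented for illustration, and the percentile bootstrap is a generic choice of interval, not necessarily the authors' method.

# Illustrative sketch, not the authors' pipeline: mean TAT saving (pre-AI minus
# post-AI) per cohort, with a 95% percentile bootstrap confidence interval.
import numpy as np
import pandas as pd

def bootstrap_mean_diff(a, b, n_boot=10_000, seed=0):
    """Return mean(a) - mean(b) and its 95% percentile bootstrap interval."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    rng = np.random.default_rng(seed)
    diffs = np.array([rng.choice(a, a.size).mean() - rng.choice(b, b.size).mean()
                      for _ in range(n_boot)])
    return a.mean() - b.mean(), np.percentile(diffs, [2.5, 97.5])

def tat_savings(df: pd.DataFrame) -> None:
    """df columns (assumed): period in {'pre', 'post'}, cohort in {'work', 'off'}, tat_min (float)."""
    for cohort, grp in df.groupby("cohort"):
        pre = grp.loc[grp["period"] == "pre", "tat_min"]
        post = grp.loc[grp["period"] == "post", "tat_min"]
        saving, ci = bootstrap_mean_diff(pre, post)
        print(f"{cohort}: TAT saving {saving:.1f} min (95% CI {ci[0]:.1f} to {ci[1]:.1f})")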
Strengthening the Evidence Base for Interpretation-Centric LLM Integration in Radiology Education.
Journal of the American College of Radiology : JACR Pub Date : 2025-09-26 DOI: 10.1016/j.jacr.2025.07.036
Deniz Esin Tekcan Sanli, Ahmet Necati Sanli
{"title":"Strengthening the Evidence Base for Interpretation-Centric LLM Integration in Radiology Education.","authors":"Deniz Esin Tekcan Sanli, Ahmet Necati Sanli","doi":"10.1016/j.jacr.2025.07.036","DOIUrl":"https://doi.org/10.1016/j.jacr.2025.07.036","url":null,"abstract":"","PeriodicalId":73968,"journal":{"name":"Journal of the American College of Radiology : JACR","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145187799","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Deep Learning for Standardized Head CT Reformatting: A Quantitative Analysis of Image Quality and Operator Variability.
Journal of the American College of Radiology : JACR Pub Date : 2025-09-23 DOI: 10.1016/j.jacr.2025.09.016
Peter D Chang, Eleanor Chu, David Floriolli, Jennifer Soun, David Fussell
{"title":"Deep Learning for Standardized Head CT Reformatting: A Quantitative Analysis of Image Quality and Operator Variability.","authors":"Peter D Chang, Eleanor Chu, David Floriolli, Jennifer Soun, David Fussell","doi":"10.1016/j.jacr.2025.09.016","DOIUrl":"https://doi.org/10.1016/j.jacr.2025.09.016","url":null,"abstract":"<p><strong>Purpose: </strong>To validate a deep learning foundation model for automated head computed tomography (CT) reformatting and to quantify the quality, speed, and variability of conventional manual reformats in a real-world dataset.</p><p><strong>Methods: </strong>A foundation artificial intelligence (AI) model was used to create automated reformats for 1,763 consecutive non-contrast head CT examinations. Model accuracy was first validated on a 100-exam subset by assessing landmark detection as well as rotational, centering, and zoom error against expert manual annotations. The validated model was subsequently used as a reference standard to evaluate the quality and speed of the original technician-generated reformats from the full dataset.</p><p><strong>Results: </strong>The AI model demonstrated high concordance with expert annotations, with a mean landmark localization error of 0.6-0.9 mm. Compared to expert-defined planes, AI-generated reformats exhibited a mean rotational error of 0.7 degrees, a mean centering error of 0.3%, and a mean zoom error of 0.4%. By contrast, technician-generated reformats demonstrated a mean rotational error of 11.2 degrees, a mean centering error of 6.4%, and a mean zoom error of 6.2%. Significant variability in manual reformat quality was observed across different factors including patient age, scanner location, report findings, and individual technician operators.</p><p><strong>Conclusion: </strong>Manual head CT reformatting is subject to substantial variability in both quality and speed. A single-shot deep learning foundation model can generate reformats with high accuracy and consistency. The implementation of such an automated method offers the potential to improve standardization, increase workflow efficiency, and reduce operational costs in clinical practice.</p>","PeriodicalId":73968,"journal":{"name":"Journal of the American College of Radiology : JACR","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145152135","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
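The rotational, centering, and zoom errors reported above have straightforward geometric readings: the angle between the reference and AI plane normals, the center offset, and the field-of-view mismatch, the latter two expressed as a percentage of the reference field of view. The sketch below implements those assumed definitions for illustration only; it is not the study's evaluation code.

# Illustrative plane-comparison metrics (assumed definitions, not the authors' code).
import numpy as np

def plane_errors(normal_ref, normal_ai, center_ref, center_ai, fov_ref, fov_ai):
    n_ref = np.asarray(normal_ref, float)
    n_ai = np.asarray(normal_ai, float)
    n_ref /= np.linalg.norm(n_ref)
    n_ai /= np.linalg.norm(n_ai)
    # Rotational error: angle between plane normals, in degrees (sign-agnostic).
    rot_deg = np.degrees(np.arccos(np.clip(abs(np.dot(n_ref, n_ai)), 0.0, 1.0)))
    # Centering error: center offset as a percentage of the reference field of view.
    center_pct = 100.0 * np.linalg.norm(np.asarray(center_ai, float) - np.asarray(center_ref, float)) / fov_ref
    # Zoom error: relative field-of-view difference, as a percentage.
    zoom_pct = 100.0 * abs(fov_ai - fov_ref) / fov_ref
    return rot_deg, center_pct, zoom_pct

# Made-up example (centers and field of view in mm): a nearly matching plane.
print(plane_errors([0, 0, 1], [0.01, 0, 1], [0, 0, 0], [1.5, 0, 0], 250.0, 251.0))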
From GPS to ChatGPT in radiology… Dumb and Dumber?
Journal of the American College of Radiology : JACR Pub Date : 2025-09-20 DOI: 10.1016/j.jacr.2025.09.014
Teodoro Martín-Noguerol, Pilar López-Úbeda, Antonio Luna
{"title":"From GPS to ChatGPT in radiology… Dumb and Dumber?","authors":"Teodoro Martín-Noguerol, Pilar López-Úbeda, Antonio Luna","doi":"10.1016/j.jacr.2025.09.014","DOIUrl":"https://doi.org/10.1016/j.jacr.2025.09.014","url":null,"abstract":"","PeriodicalId":73968,"journal":{"name":"Journal of the American College of Radiology : JACR","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145126576","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Radiologist Interaction with AI-Generated Preliminary Reports: A Longitudinal Multi-Reader Study.
Journal of the American College of Radiology : JACR Pub Date : 2025-09-20 DOI: 10.1016/j.jacr.2025.09.015
Eun Kyoung Hong, Chong-Hyun Suh, Monika Nukala, Azadehsadat Esfahani, Andro Licaros, Rachna Madan, Andetta Hunsaker, Mark Hammer
{"title":"Radiologist Interaction with AI-Generated Preliminary Reports: A Longitudinal Multi-Reader Study.","authors":"Eun Kyoung Hong, Chong-Hyun Suh, Monika Nukala, Azadehsadat Esfahani, Andro Licaros, Rachna Madan, Andetta Hunsaker, Mark Hammer","doi":"10.1016/j.jacr.2025.09.015","DOIUrl":"https://doi.org/10.1016/j.jacr.2025.09.015","url":null,"abstract":"<p><strong>Objectives: </strong>To investigate the integration of multimodal AI-generated reports into radiology workflow over time, focusing on their impact on efficiency, acceptability, and report quality.</p><p><strong>Methods: </strong>A multicase, multireader study involved 756 publicly available chest radiographs interpreted by five radiologists using preliminary reports generated by a radiology-specific multimodal AI model, divided into seven sequential batches of 108 radiographs each. Two thoracic radiologists assessed the final reports using RADPEER criteria for agreement and 5-point Likert scale for quality. Reading times, rate of acceptance without modification, agreement, and quality scores were measured, with statistical analyses evaluating trends across seven sequential batches.</p><p><strong>Results: </strong>Radiologists' reading times for chest radiographs decreased from 25.8 seconds in Batch 1 to 19.3 seconds in Batch 7 (p < .001). Acceptability increased from 54.6% to 60.2% (p < .001), with normal chest radiographs demonstrating high rates (68.9%) compared to abnormal chest radiographs (52.6%; p < .001). Median agreement and quality scores remained stable for normal chest radiographs but varied significantly for abnormal chest radiographs (ps < .05).</p><p><strong>Discussion: </strong>The introduction of AI-generated reports improved efficiency of chest radiograph interpretation, acceptability increased over time. However, agreement and quality scores showed variability, particularly in abnormal cases, emphasizing the need for oversight in the interpretation of complex chest radiographs.</p>","PeriodicalId":73968,"journal":{"name":"Journal of the American College of Radiology : JACR","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145126625","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
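The longitudinal analysis above summarizes reading time and acceptance rate per batch and tests for trends across the seven sequential batches. A minimal sketch of such a per-batch summary follows; the column names (batch, reading_time_s, accepted) are assumptions, and a simple linear regression on batch index stands in for whatever trend statistics the study actually used.

# Illustrative per-batch summary and linear trend check (assumed column names).
import pandas as pd
from scipy.stats import linregress

def batch_trends(df: pd.DataFrame) -> None:
    """df columns (assumed): batch (1-7), reading_time_s (float), accepted (bool)."""
    summary = df.groupby("batch").agg(
        mean_reading_time_s=("reading_time_s", "mean"),
        acceptance_rate=("accepted", "mean"),
    )
    print(summary)
    trend = linregress(df["batch"], df["reading_time_s"])
    print(f"Reading-time trend: {trend.slope:.2f} s per batch, p = {trend.pvalue:.4f}")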
Green Imaging: Scoping Review of Radiology's Environmental Impact.
Journal of the American College of Radiology : JACR Pub Date : 2025-09-18 DOI: 10.1016/j.jacr.2025.09.013
Sean A Woolen, Marisa Martin, Colby A Foster, Mark P MacEachern, Katherine E Maturen
{"title":"Green Imaging: Scoping Review of Radiology's Environmental Impact.","authors":"Sean A Woolen, Marisa Martin, Colby A Foster, Mark P MacEachern, Katherine E Maturen","doi":"10.1016/j.jacr.2025.09.013","DOIUrl":"https://doi.org/10.1016/j.jacr.2025.09.013","url":null,"abstract":"<p><strong>Objective: </strong>To summarize evidence for the environmental impact of radiology services and identify research gaps.</p><p><strong>Methods: </strong>A scoping review was conducted following PRISMA-ScR guidelines. Searches were performed in Ovid Medline, Web of Science, Embase, and Scopus from inception to 6/6/2025. Studies were included if they reported environmental outcomes from diagnostic imaging or image-guided procedures in humans. Two reviewers independently screened studies and extracted data. Conference abstracts, narrative reviews, editorials, non-English articles, and studies without primary data were excluded. Data were charted by environmental impact type and summarized using descriptive statistics and narrative synthesis.</p><p><strong>Results: </strong>Initial searches yielded 2,730 citations, with 115 studies included. Publications spanned 1971-2025, primarily from Europe (44%) and the U.S. (25%). Most were observational; only 8% (9/115) employed life cycle analysis (LCA). Key domains included energy use (27%), nuclear medicine waste (25%), and contrast media waste (14%). Reported annual CO<sub>2</sub> emissions for equipment varied by modality: MRI (53.1±13.2 MT), CT (12.6±2.9 MT), IR (9.6±1.0 MT), fluoroscopy (4.8 MT), radiography (0.7±0.4 MT), workstations (0.7±0.2 MT), and ultrasound (0.3 MT). Per-scan LCA estimates ranged widely: MRI (6.2-76.2 kg), CT (1.1-13.4 kg), ultrasound (0.1-1.2 kg), and radiography (0.7-7.0 kg). Radionuclides and contrast agents were frequently detected in wastewater and ecosystems. Key research gaps include inconsistent methods, limited LCA use, underexplored modalities and informatics, insufficient waste mitigation studies, and lack of cross-specialty carbon assessments.</p><p><strong>Conclusion: </strong>Among thousands of publications on imaging sustainability, few provide primary data. This review consolidates evidence on radiology's environmental impact and outlines priorities for future research.</p>","PeriodicalId":73968,"journal":{"name":"Journal of the American College of Radiology : JACR","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145103082","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
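The per-modality annual emission means quoted above can feed a quick back-of-envelope estimate of a department's equipment-related footprint. The sketch below multiplies those means (metric tons of CO2 per unit per year) by a wholly hypothetical fleet; the fleet counts are invented and the arithmetic is illustrative only, not an analysis from the review.

# Back-of-envelope footprint estimate from the per-modality means in the abstract.
# Fleet sizes are hypothetical.
ANNUAL_CO2_MT = {          # mean annual CO2 per unit, metric tons (from the abstract)
    "MRI": 53.1, "CT": 12.6, "IR": 9.6, "fluoroscopy": 4.8,
    "radiography": 0.7, "workstation": 0.7, "ultrasound": 0.3,
}
fleet = {"MRI": 3, "CT": 4, "IR": 2, "radiography": 6, "ultrasound": 10, "workstation": 30}

total_mt = sum(ANNUAL_CO2_MT[modality] * count for modality, count in fleet.items())
print(f"Estimated equipment-related emissions: {total_mt:.1f} MT CO2 per year")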
Clarification on the Methodology for Identifying Practicing Interventional Radiologists and Calculating Work RVUs.
Journal of the American College of Radiology : JACR Pub Date : 2025-09-12 DOI: 10.1016/j.jacr.2025.07.035
Zohaa Faiz, Julie Bulman, Ammar Sarwar
{"title":"Clarification on the Methodology for Identifying Practicing Interventional Radiologists and Calculating Work RVUs.","authors":"Zohaa Faiz, Julie Bulman, Ammar Sarwar","doi":"10.1016/j.jacr.2025.07.035","DOIUrl":"https://doi.org/10.1016/j.jacr.2025.07.035","url":null,"abstract":"","PeriodicalId":73968,"journal":{"name":"Journal of the American College of Radiology : JACR","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145066286","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
National Adoption of Artificial Intelligence Software in Medicare Among Radiologists.
Journal of the American College of Radiology : JACR Pub Date : 2025-09-11 DOI: 10.1016/j.jacr.2025.09.011
Elsa Zhang, Michael Dang, Joseph H Joo, Ching-Ching Claire Lin, Joshua M Liao
{"title":"National Adoption of Artificial Intelligence Software in Medicare Among Radiologists.","authors":"Elsa Zhang, Michael Dang, Joseph H Joo, Ching-Ching Claire Lin, Joshua M Liao","doi":"10.1016/j.jacr.2025.09.011","DOIUrl":"https://doi.org/10.1016/j.jacr.2025.09.011","url":null,"abstract":"","PeriodicalId":73968,"journal":{"name":"Journal of the American College of Radiology : JACR","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145058761","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Comment on "Imaging Utilization Differences After Telemedicine Versus In-Person Visits". 对“远程医疗与上门就诊后影像利用差异”的评论。
Journal of the American College of Radiology : JACR Pub Date : 2025-09-11 DOI: 10.1016/j.jacr.2025.07.034
Rachana Mehta, Ranjana Sah
{"title":"Comment on \"Imaging Utilization Differences After Telemedicine Versus In-Person Visits\".","authors":"Rachana Mehta, Ranjana Sah","doi":"10.1016/j.jacr.2025.07.034","DOIUrl":"10.1016/j.jacr.2025.07.034","url":null,"abstract":"","PeriodicalId":73968,"journal":{"name":"Journal of the American College of Radiology : JACR","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145058740","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Leveraging Large Language Models to Enhance Radiology Report Readability: A Systematic Review.
Journal of the American College of Radiology : JACR Pub Date : 2025-09-11 DOI: 10.1016/j.jacr.2025.09.004
Vasant Patwardhan, Divya Balchander, David Fussell, John Joseph, Aditya Joshi, Hayden Troutt, Justin Ling, Katherine Wei, Brent Weinberg, Daniel Chow
{"title":"Leveraging Large Language Models to Enhance Radiology Report Readability: A Systematic Review.","authors":"Vasant Patwardhan, Divya Balchander, David Fussell, John Joseph, Aditya Joshi, Hayden Troutt, Justin Ling, Katherine Wei, Brent Weinberg, Daniel Chow","doi":"10.1016/j.jacr.2025.09.004","DOIUrl":"https://doi.org/10.1016/j.jacr.2025.09.004","url":null,"abstract":"<p><strong>Background: </strong>Patients increasingly have direct access to their medical record. Radiology reports are complex and difficult for patients to understand and contextualize. One solution is to use large language models (LLMs) to translate reports into patient-accessible language. Objective This review summarizes the existing literature on using LLMs for the simplification of patient radiology reports. We also propose guidelines for best practices in future studies.</p><p><strong>Evidence acquisition: </strong>A systematic review was performed following PRISMA guidelines. Studies published and indexed using PubMed, Scopus, and Google Scholar up to February 2025 were included. Inclusion criteria comprised of studies that used large language models for simplification of diagnostic or interventional radiology reports for patients and evaluated readability. Exclusion criteria included non-English manuscripts, abstracts, conference presentations, review articles, retracted articles, and studies that did not focus on report simplification. The Mixed Methods Appraisal tool (MMAT) 2018 was used for bias assessment. Given the diversity of results, studies were categorized based on reporting methods, and qualitative and quantitative findings were presented to summarize key insights.</p><p><strong>Evidence synthesis: </strong>A total of 2126 citations were identified and 17 were included in the qualitative analysis. 71% of studies utilized a single LLM, while 29% of studies utilized multiple LLMs. The most prevalent LLMs included ChatGPT, Google Bard/Gemini, Bing Chat, Claude, and Microsoft Copilot. All studies that assessed quantitative readability metrics (n=12) reported improvements. Assessment of simplified reports via qualitative methods demonstrated varied results with physician vs non-physician raters.</p><p><strong>Conclusion and clinical impact: </strong>LLMs demonstrate the potential to enhance the accessibility of radiology reports for patients, but the literature is limited by heterogeneity of inputs, models, and evaluation metrics across existing studies. We propose a set of best practice guidelines to standardize future LLM research.</p>","PeriodicalId":73968,"journal":{"name":"Journal of the American College of Radiology : JACR","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145058723","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
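The quantitative readability improvements summarized above are typically measured with standard formulas such as the Flesch-Kincaid grade level applied to a report before and after simplification. The sketch below shows that comparison using the textstat package; the original and simplified report texts are invented examples, and no particular LLM or API is assumed.

# Illustrative before/after readability comparison using Flesch-Kincaid grade level.
# The report texts are invented; the LLM simplification step is out of scope here.
import textstat

def readability_change(original: str, simplified: str) -> None:
    before = textstat.flesch_kincaid_grade(original)
    after = textstat.flesch_kincaid_grade(simplified)
    print(f"Flesch-Kincaid grade: {before:.1f} -> {after:.1f}")

original_report = (
    "Findings: No focal consolidation, pleural effusion, or pneumothorax. "
    "Mild bibasilar atelectasis. Cardiomediastinal silhouette within normal limits."
)
simplified_report = (
    "Your lungs look clear, with no sign of infection, fluid, or a collapsed lung. "
    "A small area at the bottom of the lungs is not fully expanded, which is common "
    "and usually not serious. Your heart size is normal."
)
readability_change(original_report, simplified_report)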