ChatGPT: A Useful Tool for Medical Students in Radiology Education?

Clinical Teacher · Impact Factor 1.2 · Q4 (Medicine, Research & Experimental)
Published: 2025-10-02 · DOI: 10.1111/tct.70220
Musab Sirag, Brian M. Moloney
{"title":"ChatGPT: A Useful Tool for Medical Students in Radiology Education?","authors":"Musab Sirag,&nbsp;Brian M. Moloney","doi":"10.1111/tct.70220","DOIUrl":null,"url":null,"abstract":"<div>\n \n \n <section>\n \n <h3> Background</h3>\n \n <p>Large language models (LLMs) such as ChatGPT are increasingly being explored as educational tools in medical education, particularly in radiology. This study evaluated the accuracy of ChatGPT in recommending appropriate imaging investigations across diverse clinical scenarios, with a focus on its potential as an educational tool for medical students and junior doctors.</p>\n </section>\n \n <section>\n \n <h3> Methods</h3>\n \n <p>ChatGPT-4 (March 2024 version) was presented with a 12-case questionnaire derived from the American College of Radiology's Appropriateness Criteria (ACR-AC). One topic was selected from each of 10 diagnostic sections and two from the interventional section. The model's recommendations were compared with those published by the ACR-AC, which are based on expert consensus. The same questionnaire was also completed by 160 final-year medical students and junior doctors, and their collective performance was compared to ChatGPT.</p>\n </section>\n \n <section>\n \n <h3> Results</h3>\n \n <p>ChatGPT achieved a 100% concordance rate (12/12 scenarios) with expert panel recommendations. In contrast, the student/doctor cohort achieved a 68.0% concordance rate. The difference was statistically significant (<i>p</i> &lt; 0.05).</p>\n </section>\n \n <section>\n \n <h3> Conclusions</h3>\n \n <p>ChatGPT demonstrated high accuracy in recommending appropriate imaging investigations in a structured, guideline-based setting. These findings suggest that LLMs may serve as a valuable adjunct in radiology education, particularly in supporting imaging decision making among less experienced clinicians. However, further validation in real-world clinical environments is warranted.</p>\n </section>\n </div>","PeriodicalId":47324,"journal":{"name":"Clinical Teacher","volume":"22 6","pages":""},"PeriodicalIF":1.2000,"publicationDate":"2025-10-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Clinical Teacher","FirstCategoryId":"1085","ListUrlMain":"https://asmepublications.onlinelibrary.wiley.com/doi/10.1111/tct.70220","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"MEDICINE, RESEARCH & EXPERIMENTAL","Score":null,"Total":0}
Citations: 0

Abstract

Background

Large language models (LLMs) such as ChatGPT are increasingly being explored as educational tools in medical education, particularly in radiology. This study evaluated the accuracy of ChatGPT in recommending appropriate imaging investigations across diverse clinical scenarios, with a focus on its potential as an educational tool for medical students and junior doctors.

Methods

ChatGPT-4 (March 2024 version) was presented with a 12-case questionnaire derived from the American College of Radiology's Appropriateness Criteria (ACR-AC). One topic was selected from each of 10 diagnostic sections and two from the interventional section. The model's recommendations were compared with those published by the ACR-AC, which are based on expert consensus. The same questionnaire was also completed by 160 final-year medical students and junior doctors, and their collective performance was compared with that of ChatGPT.
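The abstract does not describe how the scenarios were posed to the model beyond the version used. Purely as an illustration, the sketch below shows one way such a comparison could be scripted against the OpenAI API; the vignette text, model string and manual scoring step are hypothetical placeholders, not the authors' protocol or study materials.

```python
# Illustrative sketch only: posing an ACR-AC-style vignette to a GPT-4-class
# model and recording its recommendation for later comparison with the
# ACR-AC "usually appropriate" study. Vignette and model name are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

vignette = (
    "A 45-year-old presents with acute right upper quadrant pain. "
    "Which single imaging investigation is most appropriate first?"
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": vignette}],
)

recommendation = response.choices[0].message.content
print(recommendation)  # concordance with ACR-AC would then be judged manually
```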

Results

ChatGPT achieved a 100% concordance rate (12/12 scenarios) with expert panel recommendations. In contrast, the student/doctor cohort achieved a 68.0% concordance rate. The difference was statistically significant (p < 0.05).
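The abstract reports significance but not the statistical test used. As a sketch of the underlying arithmetic only, the snippet below applies a Fisher's exact test to pooled answer counts; pooling all 160 respondents' answers (160 × 12 = 1,920) is an assumption, not a stated feature of the analysis.

```python
# Sketch of the reported comparison; the abstract does not name the test used.
# Assumption: responses are pooled across respondents and compared with a
# Fisher's exact test on a 2x2 table of concordant/discordant counts.
from scipy.stats import fisher_exact

chatgpt_correct, chatgpt_wrong = 12, 0        # 12/12 scenarios concordant
cohort_total = 160 * 12                       # 160 respondents x 12 scenarios = 1920
cohort_correct = round(0.680 * cohort_total)  # 68.0% concordance ~ 1306
cohort_wrong = cohort_total - cohort_correct

_, p_value = fisher_exact([[chatgpt_correct, chatgpt_wrong],
                           [cohort_correct, cohort_wrong]])
print(f"p = {p_value:.4g}")                   # below 0.05, consistent with the abstract
```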

Conclusions

ChatGPT demonstrated high accuracy in recommending appropriate imaging investigations in a structured, guideline-based setting. These findings suggest that LLMs may serve as a valuable adjunct in radiology education, particularly in supporting imaging decision making among less experienced clinicians. However, further validation in real-world clinical environments is warranted.


Source journal

Clinical Teacher (Medicine, Research & Experimental)
CiteScore: 2.90 · Self-citation rate: 5.60% · Articles published per year: 113
Journal description: The Clinical Teacher has been designed with the active, practising clinician in mind. It aims to provide a digest of current research, practice and thinking in medical education, presented in a readable, stimulating and practical style. The journal includes sections reviewing the literature on clinical teaching, bringing authoritative views on the latest thinking about modern teaching. There are also sections on specific teaching approaches; a digest of the latest research published in Medical Education and other teaching journals; reports of initiatives, advances in thinking and practical teaching from around the world; and expert commentary and discussion on challenging and controversial issues in today's clinical education.