{"title":"ChatGPT:医学生放射学教育的有用工具?","authors":"Musab Sirag, Brian M. Moloney","doi":"10.1111/tct.70220","DOIUrl":null,"url":null,"abstract":"<div>\n \n \n <section>\n \n <h3> Background</h3>\n \n <p>Large language models (LLMs) such as ChatGPT are increasingly being explored as educational tools in medical education, particularly in radiology. This study evaluated the accuracy of ChatGPT in recommending appropriate imaging investigations across diverse clinical scenarios, with a focus on its potential as an educational tool for medical students and junior doctors.</p>\n </section>\n \n <section>\n \n <h3> Methods</h3>\n \n <p>ChatGPT-4 (March 2024 version) was presented with a 12-case questionnaire derived from the American College of Radiology's Appropriateness Criteria (ACR-AC). One topic was selected from each of 10 diagnostic sections and two from the interventional section. The model's recommendations were compared with those published by the ACR-AC, which are based on expert consensus. The same questionnaire was also completed by 160 final-year medical students and junior doctors, and their collective performance was compared to ChatGPT.</p>\n </section>\n \n <section>\n \n <h3> Results</h3>\n \n <p>ChatGPT achieved a 100% concordance rate (12/12 scenarios) with expert panel recommendations. In contrast, the student/doctor cohort achieved a 68.0% concordance rate. The difference was statistically significant (<i>p</i> < 0.05).</p>\n </section>\n \n <section>\n \n <h3> Conclusions</h3>\n \n <p>ChatGPT demonstrated high accuracy in recommending appropriate imaging investigations in a structured, guideline-based setting. These findings suggest that LLMs may serve as a valuable adjunct in radiology education, particularly in supporting imaging decision making among less experienced clinicians. However, further validation in real-world clinical environments is warranted.</p>\n </section>\n </div>","PeriodicalId":47324,"journal":{"name":"Clinical Teacher","volume":"22 6","pages":""},"PeriodicalIF":1.2000,"publicationDate":"2025-10-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"ChatGPT: A Useful Tool for Medical Students in Radiology Education?\",\"authors\":\"Musab Sirag, Brian M. Moloney\",\"doi\":\"10.1111/tct.70220\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div>\\n \\n \\n <section>\\n \\n <h3> Background</h3>\\n \\n <p>Large language models (LLMs) such as ChatGPT are increasingly being explored as educational tools in medical education, particularly in radiology. This study evaluated the accuracy of ChatGPT in recommending appropriate imaging investigations across diverse clinical scenarios, with a focus on its potential as an educational tool for medical students and junior doctors.</p>\\n </section>\\n \\n <section>\\n \\n <h3> Methods</h3>\\n \\n <p>ChatGPT-4 (March 2024 version) was presented with a 12-case questionnaire derived from the American College of Radiology's Appropriateness Criteria (ACR-AC). One topic was selected from each of 10 diagnostic sections and two from the interventional section. The model's recommendations were compared with those published by the ACR-AC, which are based on expert consensus. 
The same questionnaire was also completed by 160 final-year medical students and junior doctors, and their collective performance was compared to ChatGPT.</p>\\n </section>\\n \\n <section>\\n \\n <h3> Results</h3>\\n \\n <p>ChatGPT achieved a 100% concordance rate (12/12 scenarios) with expert panel recommendations. In contrast, the student/doctor cohort achieved a 68.0% concordance rate. The difference was statistically significant (<i>p</i> < 0.05).</p>\\n </section>\\n \\n <section>\\n \\n <h3> Conclusions</h3>\\n \\n <p>ChatGPT demonstrated high accuracy in recommending appropriate imaging investigations in a structured, guideline-based setting. These findings suggest that LLMs may serve as a valuable adjunct in radiology education, particularly in supporting imaging decision making among less experienced clinicians. However, further validation in real-world clinical environments is warranted.</p>\\n </section>\\n </div>\",\"PeriodicalId\":47324,\"journal\":{\"name\":\"Clinical Teacher\",\"volume\":\"22 6\",\"pages\":\"\"},\"PeriodicalIF\":1.2000,\"publicationDate\":\"2025-10-02\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Clinical Teacher\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://asmepublications.onlinelibrary.wiley.com/doi/10.1111/tct.70220\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"MEDICINE, RESEARCH & EXPERIMENTAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Clinical Teacher","FirstCategoryId":"1085","ListUrlMain":"https://asmepublications.onlinelibrary.wiley.com/doi/10.1111/tct.70220","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"MEDICINE, RESEARCH & EXPERIMENTAL","Score":null,"Total":0}
ChatGPT: A Useful Tool for Medical Students in Radiology Education?
Background
Large language models (LLMs) such as ChatGPT are increasingly being explored as teaching tools in medical education, particularly in radiology. This study evaluated the accuracy of ChatGPT in recommending appropriate imaging investigations across diverse clinical scenarios, focusing on its potential as an educational tool for medical students and junior doctors.
Methods
ChatGPT-4 (March 2024 version) was presented with a 12-case questionnaire derived from the American College of Radiology's Appropriateness Criteria (ACR-AC). One topic was selected from each of the 10 diagnostic sections and two from the interventional section. The model's recommendations were compared with those published by the ACR-AC, which are based on expert consensus. The same questionnaire was also completed by 160 final-year medical students and junior doctors, and their collective performance was compared with that of ChatGPT.
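The scoring step of this protocol is simple to reproduce. The Python sketch below shows one plausible way to pose a clinical vignette to a model and tally concordance with a reference recommendation; it assumes the OpenAI Python client, and the two vignettes, reference answers and substring-matching rule are illustrative placeholders, not the authors' actual 12-case instrument or grading scheme.

```python
# Minimal sketch of the questionnaire protocol, under stated assumptions:
# the OpenAI Python client, hypothetical vignettes, and crude substring
# matching. The study itself used 12 ACR-AC-derived cases graded against
# the published appropriateness ratings.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical (vignette, reference recommendation) pairs
scenarios = [
    ("Adult with a first unprovoked seizure and a normal neurological exam. "
     "What is the most appropriate imaging investigation?", "MRI head"),
    ("Suspected pulmonary embolism in a haemodynamically stable patient. "
     "What is the most appropriate imaging investigation?", "CT pulmonary angiography"),
]

concordant = 0
for vignette, reference in scenarios:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "Recommend a single imaging investigation."},
            {"role": "user", "content": vignette},
        ],
    )
    answer = response.choices[0].message.content
    # Crude concordance check; a real grading step would compare the answer
    # against the ACR-AC appropriateness ratings for the scenario.
    if reference.lower() in answer.lower():
        concordant += 1

print(f"Concordance: {concordant}/{len(scenarios)}")
```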
Results
ChatGPT achieved a 100% concordance rate (12/12 scenarios) with expert panel recommendations. In contrast, the student/doctor cohort achieved a 68.0% concordance rate. The difference was statistically significant (p < 0.05).
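The abstract does not state which statistical test produced p < 0.05. One illustrative check, treating the cohort's 68.0% concordance as the reference proportion, is a one-sided exact binomial test of ChatGPT's 12/12 score; this is a sketch of a plausible analysis, not the authors' reported method.

```python
# Illustrative only: the paper does not specify its statistical test.
# A one-sided exact binomial test asks how likely 12/12 concordant answers
# would be if ChatGPT's true concordance matched the cohort's 68.0% rate.
from scipy.stats import binomtest

result = binomtest(k=12, n=12, p=0.68, alternative="greater")
print(f"p-value = {result.pvalue:.4f}")  # P(X >= 12) = 0.68**12 ≈ 0.0098 < 0.05
```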
Conclusions
ChatGPT demonstrated high accuracy in recommending appropriate imaging investigations in a structured, guideline-based setting. These findings suggest that LLMs may serve as a valuable adjunct in radiology education, particularly in supporting imaging decision making among less experienced clinicians. However, further validation in real-world clinical environments is warranted.
About the Journal
The Clinical Teacher has been designed with the active, practising clinician in mind. It aims to provide a digest of current research, practice and thinking in medical education, presented in a readable, stimulating and practical style. The journal includes sections reviewing the literature on clinical teaching, bringing authoritative views on the latest thinking about modern teaching. There are also sections on specific teaching approaches, a digest of the latest research published in Medical Education and other teaching journals, reports of initiatives and advances in thinking and practical teaching from around the world, and expert commentary and discussion on challenging and controversial issues in today's clinical education.