ChatGPT performance on radiation technologist and therapist entry to practice exams

IF 1.3 · JCR Q3 · Radiology, Nuclear Medicine & Medical Imaging
Ryan Duggan, Kaitlyn M. Tsuruda
{"title":"ChatGPT performance on radiation technologist and therapist entry to practice exams","authors":"Ryan Duggan ,&nbsp;Kaitlyn M. Tsuruda","doi":"10.1016/j.jmir.2024.04.019","DOIUrl":null,"url":null,"abstract":"<div><h3>Background</h3><p>The aim of this study was to describe the proficiency of ChatGPT (GPT-4) on certification style exams from the Canadian Association of Medical Radiation Technologists (CAMRT), and describe its performance across multiple exam attempts.</p></div><div><h3>Methods</h3><p>ChatGPT was prompted with questions from CAMRT practice exams in the disciplines of radiological technology, magnetic resonance (MRI), nuclear medicine and radiation therapy (87-98 questions each). ChatGPT attempted each exam five times. Exam performance was evaluated using descriptive statistics, stratified by discipline and question type (knowledge, application, critical thinking). Light's Kappa was used to assess agreement in answers across attempts.</p></div><div><h3>Results</h3><p>Using a passing grade of 65 %, ChatGPT passed the radiological technology exam only once (20 %), MRI all five times (100 %), nuclear medicine three times (60 %), and radiation therapy all five times (100 %). ChatGPT's performance was best on knowledge questions across all disciplines except radiation therapy. It performed worst on critical thinking questions. Agreement in ChatGPT's responses across attempts was substantial within the disciplines of radiological technology, MRI, and nuclear medicine, and almost perfect for radiation therapy.</p></div><div><h3>Conclusion</h3><p>ChatGPT (GPT-4) was able to pass certification style exams for radiation technologists and therapists, but its performance varied between disciplines. The algorithm demonstrated substantial to almost perfect agreement in the responses it provided across multiple exam attempts. Future research evaluating ChatGPT's performance on standardized tests should consider using repeated measures.</p></div>","PeriodicalId":46420,"journal":{"name":"Journal of Medical Imaging and Radiation Sciences","volume":null,"pages":null},"PeriodicalIF":1.3000,"publicationDate":"2024-05-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S193986542400122X/pdfft?md5=4da848fd7c61e04179d80181078074fe&pid=1-s2.0-S193986542400122X-main.pdf","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Medical Imaging and Radiation Sciences","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S193986542400122X","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING","Score":null,"Total":0}
Citations: 0

Abstract

Background

The aim of this study was to describe the proficiency of ChatGPT (GPT-4) on certification-style exams from the Canadian Association of Medical Radiation Technologists (CAMRT), and to describe its performance across multiple exam attempts.

Methods

ChatGPT was prompted with questions from CAMRT practice exams in the disciplines of radiological technology, magnetic resonance imaging (MRI), nuclear medicine, and radiation therapy (87–98 questions each). ChatGPT attempted each exam five times. Exam performance was evaluated using descriptive statistics, stratified by discipline and question type (knowledge, application, critical thinking). Light's kappa was used to assess agreement in answers across attempts.
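Light's kappa generalizes Cohen's kappa to more than two ratings of the same items: it is the average of Cohen's kappa over every pair of raters, here every pair of exam attempts. The sketch below illustrates that calculation; the answer vectors and the use of scikit-learn's cohen_kappa_score are illustrative assumptions, not the study's materials.

```python
# A minimal sketch of the agreement analysis described in the Methods,
# assuming each attempt's answers are recorded as one option letter per
# question. Light's kappa is the mean of Cohen's kappa over all pairs of
# attempts. The answer data below are hypothetical, not the study's.
from itertools import combinations
from statistics import mean

from sklearn.metrics import cohen_kappa_score

# Five hypothetical attempts at the same exam (one entry per question).
attempts = [
    ["A", "C", "B", "D", "A", "B"],
    ["A", "C", "B", "D", "A", "C"],
    ["A", "C", "D", "D", "A", "B"],
    ["A", "C", "B", "D", "A", "B"],
    ["A", "B", "B", "D", "A", "B"],
]

# Cohen's kappa for every pair of attempts, averaged (Light's kappa).
pairwise = [cohen_kappa_score(a, b) for a, b in combinations(attempts, 2)]
lights_kappa = mean(pairwise)
print(f"Light's kappa across {len(attempts)} attempts: {lights_kappa:.2f}")
```

With five attempts there are ten pairwise comparisons; averaging them yields a single agreement value per exam, which is what the study reports per discipline.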

Results

Using a passing grade of 65%, ChatGPT passed the radiological technology exam only once (20%), MRI all five times (100%), nuclear medicine three times (60%), and radiation therapy all five times (100%). ChatGPT's performance was best on knowledge questions across all disciplines except radiation therapy; it performed worst on critical thinking questions. Agreement in ChatGPT's responses across attempts was substantial within the disciplines of radiological technology, MRI, and nuclear medicine, and almost perfect for radiation therapy (on the standard Landis and Koch benchmarks, "substantial" corresponds to kappa values of 0.61–0.80 and "almost perfect" to 0.81–1.00).
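Each pass rate above is the number of passing attempts out of five, expressed as a percentage; a minimal sketch with invented score vectors (not the study's data) makes the 65% cutoff explicit.

```python
# Hypothetical per-attempt scores illustrating the pass-rate calculation
# at a 65% passing grade (values invented, not the study's data).
PASS_MARK = 65.0

scores_by_discipline = {
    "radiological technology": [62.0, 66.0, 58.0, 61.0, 63.0],  # 1/5 pass
    "MRI": [74.0, 78.0, 71.0, 80.0, 76.0],                      # 5/5 pass
}

for discipline, scores in scores_by_discipline.items():
    passes = sum(score >= PASS_MARK for score in scores)
    rate = 100 * passes / len(scores)
    print(f"{discipline}: {passes}/{len(scores)} attempts passed ({rate:.0f}%)")
```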

Conclusion

ChatGPT (GPT-4) was able to pass certification-style exams for radiation technologists and therapists, but its performance varied between disciplines. The model demonstrated substantial to almost perfect agreement in the responses it provided across multiple exam attempts. Future research evaluating ChatGPT's performance on standardized tests should consider using repeated measures.

Source journal
Journal of Medical Imaging and Radiation Sciences
CiteScore: 2.30
Self-citation rate: 11.10%
Articles published: 231
Review time: 53 days
Journal description: Journal of Medical Imaging and Radiation Sciences is the official peer-reviewed journal of the Canadian Association of Medical Radiation Technologists. The journal is published four times a year and is circulated to approximately 11,000 medical radiation technologists, libraries, and radiology departments throughout Canada, the United States, and overseas. The Journal publishes articles on recent research, new technology and techniques, professional practices, and technologists' viewpoints, as well as relevant book reviews.