Performance of GPT-4 Turbo and GPT-4o in Korean Society of Radiology In-Training Examinations.

IF 4.4 · CAS Medicine Tier 2 · JCR Q1 (Radiology, Nuclear Medicine & Medical Imaging)
Korean Journal of Radiology · Pub Date: 2025-06-01 · Epub Date: 2025-04-17 · DOI: 10.3348/kjr.2024.1096
Arum Choi, Hyun Gi Kim, Moon Hyung Choi, Shakthi Kumaran Ramasamy, Youme Kim, Seung Eun Jung

Abstract

Objective: Despite the potential of large language models for radiology training, their ability to handle image-based radiological questions remains poorly understood. This study aimed to evaluate the performance of GPT-4 Turbo and GPT-4o on radiology resident examinations, to analyze differences across question types, and to compare their results with those of residents at different training levels.

Materials and methods: A total of 776 multiple-choice questions from the Korean Society of Radiology In-Training Examinations were used, forming two question sets: one originally written in Korean and the other translated into English. We evaluated the performance of GPT-4 Turbo (gpt-4-turbo-2024-04-09) and GPT-4o (gpt-4o-2024-11-20) on these questions with the temperature set to zero, determining the accuracy based on the majority vote from five independent trials. We analyzed their results using the question type (text-only vs. image-based) and benchmarked them against nationwide radiology residents' performance. The impact of the input language (Korean or English) on model performance was examined.
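The accuracy-scoring scheme described above (five independent trials per question, with the majority-vote answer counted as the model's final response) can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the question and answer data are hypothetical placeholders.

```python
from collections import Counter

def majority_vote(answers):
    """Return the most frequent answer among repeated trials of one question."""
    return Counter(answers).most_common(1)[0][0]

def majority_vote_accuracy(trial_answers, answer_key):
    """trial_answers: one list of per-trial answers for each question.
    Accuracy is computed on the majority-vote answer per question."""
    voted = [majority_vote(a) for a in trial_answers]
    return sum(v == k for v, k in zip(voted, answer_key)) / len(answer_key)

# Hypothetical example: three questions, five trials each
trials = [
    ["B", "B", "C", "B", "B"],
    ["A", "D", "A", "A", "A"],
    ["C", "C", "C", "B", "C"],
]
key = ["B", "A", "D"]
print(majority_vote_accuracy(trials, key))  # 2 of 3 majority answers match the key
```

With the temperature set to zero, repeated trials mostly differ only where the model's decoding is near-tied, so the majority vote smooths residual run-to-run variation.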

Results: GPT-4o outperformed GPT-4 Turbo for both image-based (48.2% vs. 41.8%, P = 0.002) and text-only questions (77.9% vs. 69.0%, P = 0.031). On image-based questions, GPT-4 Turbo and GPT-4o showed comparable performance to that of 1st-year residents (41.8% and 48.2%, respectively, vs. 43.3%, P = 0.608 and 0.079, respectively) but lower performance than that of 2nd- to 4th-year residents (vs. 56.0%-63.9%, all P ≤ 0.005). For text-only questions, GPT-4 Turbo and GPT-4o performed better than residents across all years (69.0% and 77.9%, respectively, vs. 44.7%-57.5%, all P ≤ 0.039). Performance on the English- and Korean-version questions showed no significant differences for either model (all P ≥ 0.275).
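The accuracy comparisons above reduce to tests on two proportions. The abstract does not state which statistical test the authors used, so the following is only a generic two-proportion z-test sketch (pooled standard error, two-sided), with illustrative counts rather than the study's actual data.

```python
import math

def two_proportion_z(k1, n1, k2, n2):
    """Two-sided pooled z-test for H0: p1 == p2, given k successes out of n."""
    p1, p2 = k1 / n1, k2 / n2
    pooled = (k1 + k2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts (not the study's data): 80/100 correct vs. 50/100 correct
z, p = two_proportion_z(80, 100, 50, 100)
print(f"z = {z:.2f}, p = {p:.2e}")
```

For paired designs, where both models answer the same questions, a McNemar test on the discordant pairs would be the more conventional choice; the z-test here is only a self-contained illustration of how such P-values arise.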

Conclusion: GPT-4o outperformed GPT-4 Turbo on all question types. On image-based questions, both models' performance matched that of 1st-year residents but was lower than that of higher-year residents. On text-only questions, both models outperformed residents at all training levels. Both models showed consistent performance across English and Korean inputs.

Source journal: Korean Journal of Radiology (Medicine – Nuclear Medicine)
CiteScore: 10.60 · Self-citation rate: 12.50% · Articles per year: 141 · Review time: 1.3 months
About the journal: The inaugural issue of the Korean J Radiol came out in March 2000. The journal aims to produce and propagate knowledge on radiologic imaging and related sciences. A unique feature of the articles published in the journal is their reflection of global trends in radiology combined with an East-Asian perspective. Geographic differences in disease prevalence are reflected in the contents of papers, which serves to enrich the body of knowledge. The world's outstanding radiologists from many countries serve on the editorial board.