Evaluating the Performance of ChatGPT on Board-Style Examination Questions in Ophthalmology: A Meta-Analysis.

IF 5.7 | CAS Region 3 (Medicine) | JCR Q1, HEALTH CARE SCIENCES & SERVICES
Jiawen Wei, Xiaoyan Wang, Mingxue Huang, Yanwu Xu, Weihua Yang
Citations: 0

Abstract

To review empirical research on ChatGPT's accuracy in answering ophthalmology board-style examination questions published up to March 2025, and to analyze the effects of GPT version, question type, language, and ophthalmology subtopic on accuracy. A search of PubMed, Web of Science, Embase, Scopus, and the Cochrane Library was conducted in March 2025. Two authors extracted data and independently assessed study quality. Pooled accuracy rates were calculated with Stata 17.0. GPT-4 achieved a pooled accuracy of 73%, higher than GPT-3.5's 54%. It scored 77% on text-based questions and 55% on image-based questions. GPT-4's accuracy was 73% in studies from English-speaking countries and 71% in non-English-speaking countries. Across ophthalmology subtopics, General Medicine had the highest accuracy (80%), while Clinical Optics had the lowest (55%). GPT-4 outperforms GPT-3.5, but its image-processing capability requires further validation. Performance varies by language and topic, suggesting the need for further research on cross-linguistic efficacy and error analysis.

Source Journal
Journal of Medical Systems (Medicine, Health Care Sciences & Services)
CiteScore: 11.60
Self-citation rate: 1.90%
Articles per year: 83
Review time: 4.8 months
Journal description: Journal of Medical Systems provides a forum for the presentation and discussion of the increasingly extensive applications of new systems techniques and methods in hospital, clinic, and physician's office administration; pathology, radiology, and pharmaceutical delivery systems; medical records storage and retrieval; and ancillary patient-support systems. The journal publishes informative articles, essays, and studies across the entire scale of medical systems, from large hospital programs to novel small-scale medical services. Education is an integral part of this amalgamation of sciences, and selected articles are published in this area. Since existing medical systems are constantly being modified to fit particular circumstances and to solve specific problems, the journal includes a special section devoted to status reports on current installations.