Comparing closed and open large language models on pediatric cardiology board exam performance.

Impact Factor 0.7 · Q4 · Cardiac & Cardiovascular Systems
Annals of Pediatric Cardiology Pub Date : 2025-11-01 Epub Date: 2026-03-16 DOI:10.4103/apc.apc_301_25
Nino Nikolovski, Conall T Morgan, Michael N Gritti
Annals of Pediatric Cardiology, vol. 18(6), pp. 590-593. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13048703/pdf/
Citations: 0

Abstract

Large language models (LLMs) have gained traction in medicine, but there is limited research comparing closed- and open-source models in subspecialty contexts. This study evaluated ChatGPT-4.0o and DeepSeek-R1 on a pediatric cardiology board-style examination to quantify their accuracy and discuss their educational and clinical utility. ChatGPT-4.0o and DeepSeek-R1 were used to answer 88 text-based multiple-choice questions across 11 pediatric cardiology subtopics from a Pediatric Cardiology Board Review textbook. DeepSeek-R1's processing time per question was measured. ChatGPT-4.0o and DeepSeek-R1 achieved 70% (62/88) and 68% (60/88) accuracy, respectively (p = 0.53). Subtopic accuracy was equal in 5 of 11 chapters, with each model outperforming its counterpart in 3 of 11. DeepSeek-R1's processing time negatively correlated with accuracy (r = -0.68, p = 0.02). ChatGPT-4.0o and DeepSeek-R1 were comparable in accuracy and approached the passing threshold on a pediatric cardiology board examination. While further development of LLMs is required for clinical integration into pediatric cardiology, these findings suggest the potential utility of these models as educational aids.
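The abstract does not state which statistical test produced p = 0.53 for the 62/88 vs. 60/88 comparison. As an illustrative sketch only (not necessarily the authors' method, which may have been a paired test such as McNemar's), a simple unpaired two-proportion z-test on those counts likewise shows no significant difference:

```python
import math

def two_proportion_z(successes1, n1, successes2, n2):
    """Two-sided two-proportion z-test with a pooled
    standard error; returns (z statistic, p-value)."""
    p1, p2 = successes1 / n1, successes2 / n2
    pooled = (successes1 + successes2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF via math.erf
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Accuracy counts reported in the abstract:
# ChatGPT-4.0o 62/88, DeepSeek-R1 60/88
z, p = two_proportion_z(62, 88, 60, 88)
print(f"z = {z:.3f}, p = {p:.3f}")  # p well above 0.05: not significant
```

The exact p-value from this unpaired test differs from the paper's reported 0.53, but the qualitative conclusion (no significant accuracy difference between the two models) is the same.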

Source journal
Annals of Pediatric Cardiology (Cardiac & Cardiovascular Systems)
CiteScore: 1.40
Self-citation rate: 14.30%
Articles per year: 51
Review time: 23 weeks