Performance of large language models on family medicine licensing exams.

IF 2.4 | CAS Zone 4, Medicine | JCR Q1, MEDICINE, GENERAL & INTERNAL
Mahmud Omar, Kareem Hijazi, Mohammad Omar, Girish N Nadkarni, Eyal Klang
{"title":"Performance of large language models on family medicine licensing exams.","authors":"Mahmud Omar, Kareem Hijazi, Mohammad Omar, Girish N Nadkarni, Eyal Klang","doi":"10.1093/fampra/cmaf035","DOIUrl":null,"url":null,"abstract":"<p><strong>Background and aim: </strong>Large language models (LLMs) have shown promise in specialized medical exams but remain less explored in family medicine and primary care. This study evaluated eight state-of-the-art LLMs on the official Israeli primary care licensing exam, focusing on prompt design and explanation quality.</p><p><strong>Methods: </strong>Two hundred multiple-choice questions were tested using simple and few-shot Chain-of-Thought prompts (prompts that include examples which illustrate reasoning). Performance differences were assessed with Cochran's Q and pairwise McNemar tests. A stress test of the top performer (openAI's o1-preview) examined 30 selected questions, with two physicians scoring explanations for accuracy, logic, and hallucinations (extra or fabricated information not supported by the question).</p><p><strong>Results: </strong>Five models exceeded the 65% passing threshold under simple prompts; seven did so with few-shot prompts. o1-preview reached 85.5%. In the stress test, explanations were generally coherent and accurate, with 5 of 120 flagged for hallucinations. Inter-rater agreement on explanation scoring was high (weighted kappa 0.773; Intraclass Correlation Coefficient (ICC) 0.776).</p><p><strong>Conclusions: </strong>Most tested models performed well on an official family medicine exam, especially with structured prompts. Nonetheless, multiple-choice formats cannot address broader clinical competencies such as physical exams and patient rapport. Future efforts should refine these models to eliminate hallucinations, test for socio-demographic biases, and ensure alignment with real-world demands.</p>","PeriodicalId":12209,"journal":{"name":"Family practice","volume":"42 4","pages":""},"PeriodicalIF":2.4000,"publicationDate":"2025-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Family practice","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1093/fampra/cmaf035","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"MEDICINE, GENERAL & INTERNAL","Score":null,"Total":0}
引用次数: 0

Abstract

Background and aim: Large language models (LLMs) have shown promise in specialized medical exams but remain less explored in family medicine and primary care. This study evaluated eight state-of-the-art LLMs on the official Israeli primary care licensing exam, focusing on prompt design and explanation quality.

Methods: Two hundred multiple-choice questions were tested using simple prompts and few-shot Chain-of-Thought prompts (prompts that include worked examples illustrating step-by-step reasoning). Performance differences were assessed with Cochran's Q and pairwise McNemar tests. A stress test of the top performer (OpenAI's o1-preview) examined 30 selected questions, with two physicians scoring explanations for accuracy, logic, and hallucinations (extraneous or fabricated information not supported by the question).
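
For illustration, here is a minimal sketch of what a few-shot Chain-of-Thought prompt for a single multiple-choice item might look like, using the OpenAI Python SDK. The example question, reasoning, and prompt wording are invented for this sketch; the study's actual prompts and exam items are not given in the abstract.

```python
# Sketch of a few-shot Chain-of-Thought prompt for one exam item.
# The example question, reasoning, and prompt wording are illustrative
# assumptions; the study's real prompts are not published in the abstract.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

FEW_SHOT_COT = """\
Example question: A 58-year-old man with type 2 diabetes has blood pressure
readings of 150/95 mmHg on two separate visits. Best next step?
A) Reassurance  B) Start an ACE inhibitor  C) Echocardiogram  D) Nephrology referral
Reasoning: Confirmed hypertension in a diabetic patient warrants drug
treatment; ACE inhibitors are first-line given their renoprotective effect.
Answer: B

Answer the next question the same way: reason step by step, then give a
single letter as the final answer.
"""

def ask(question: str, model: str = "o1-preview") -> str:
    """Send one multiple-choice item with the few-shot preamble attached."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": FEW_SHOT_COT + question}],
    )
    return response.choices[0].message.content
```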

Results: Five models exceeded the 65% passing threshold under simple prompts; seven did so with few-shot prompts. o1-preview reached 85.5%. In the stress test, explanations were generally coherent and accurate, with 5 of 120 explanations flagged for hallucinations. Inter-rater agreement on explanation scoring was high (weighted kappa 0.773; intraclass correlation coefficient [ICC] 0.776).
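
As a reproducibility aid, the sketch below shows how the statistics named in the abstract could be computed with statsmodels and scikit-learn on synthetic stand-in data; the real per-question correctness matrix and physician ratings are not published here, and the kappa weighting scheme and 1-5 rating scale are assumptions.

```python
# Sketch of the abstract's statistical comparisons, on synthetic data.
import numpy as np
import pandas as pd
from sklearn.metrics import cohen_kappa_score
from statsmodels.stats.contingency_tables import cochrans_q, mcnemar

rng = np.random.default_rng(0)
# 200 questions x 8 models: 1 = answered correctly, 0 = not (synthetic).
correct = rng.integers(0, 2, size=(200, 8))

# Cochran's Q: do accuracies differ across the 8 models overall?
print(cochrans_q(correct))

# Pairwise McNemar between two models (2x2 table of joint outcomes).
table = pd.crosstab(correct[:, 0], correct[:, 1]).to_numpy()
print(mcnemar(table, exact=True))

# Inter-rater agreement on the 120 explanation scores (1-5 scale assumed).
rater1 = rng.integers(1, 6, size=120)
rater2 = np.clip(rater1 + rng.integers(-1, 2, size=120), 1, 5)
# Weighting scheme is an assumption; the abstract says only "weighted kappa".
print("weighted kappa:", cohen_kappa_score(rater1, rater2, weights="linear"))
# The reported ICC could be computed with, e.g., pingouin.intraclass_corr.
```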

Conclusions: Most tested models performed well on an official family medicine exam, especially with structured prompts. Nonetheless, multiple-choice formats cannot assess broader clinical competencies such as physical examination and patient rapport. Future efforts should refine these models to eliminate hallucinations, test for socio-demographic biases, and ensure alignment with real-world demands.

Source journal

Family Practice (Medicine: General & Internal)
CiteScore: 4.30
Self-citation rate: 9.10%
Articles per year: 144
Review time: 4-8 weeks
Journal description: Family Practice is an international journal aimed at practitioners, teachers, and researchers in the fields of family medicine, general practice, and primary care in both developed and developing countries. Family Practice offers its readership an international view of the problems and preoccupations in the field, while providing a medium of instruction and exploration. The journal's range and content cover such areas as health care delivery, epidemiology, public health, and clinical case studies. The journal aims to be interdisciplinary, and contributions from other disciplines of medicine and social science are always welcomed.