Health Data Science · Pub date: 2025-04-01 · eCollection date: 2025-01-01 · DOI: 10.34133/hds.0250
Yu Hou, Jay Patel, Liya Dai, Emily Zhang, Yang Liu, Zaifu Zhan, Pooja Gangwani, Rui Zhang
Journal: Health Data Science, vol. 5, article 0250 (journal article, eCollection 2025-01-01). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11961047/pdf/
Benchmarking of Large Language Models for the Dental Admission Test.

Background: Large language models (LLMs) have shown promise in educational applications, but their performance on high-stakes admissions tests, such as the Dental Admission Test (DAT), remains unclear. Understanding the capabilities and limitations of these models is critical for determining their suitability in test preparation. Methods: This study evaluated the ability of 16 LLMs, including general-purpose models (e.g., GPT-3.5, GPT-4, GPT-4o, GPT-o1, Google's Bard, mistral-large, and Claude), domain-specific fine-tuned models (e.g., DentalGPT, MedGPT, and BioGPT), and open-source models (e.g., Llama2-7B, Llama2-13B, Llama2-70B, Llama3-8B, and Llama3-70B), to answer questions from a sample DAT. Quantitative analysis was performed to assess model accuracy in different sections, and qualitative thematic analysis by subject matter experts examined specific challenges encountered by the models. Results: GPT-4o and GPT-o1 outperformed others in text-based questions assessing knowledge and comprehension, with GPT-o1 achieving perfect scores in the natural sciences (NS) and reading comprehension (RC) sections. Open-source models such as Llama3-70B also performed competitively in RC tasks. However, all models, including GPT-4o, struggled substantially with perceptual ability (PA) items, highlighting a persistent limitation in handling image-based tasks requiring visual-spatial reasoning. Fine-tuned medical models (e.g., DentalGPT, MedGPT, and BioGPT) demonstrated moderate success in text-based tasks but underperformed in areas requiring critical thinking and reasoning. Thematic analysis identified key challenges, including difficulties with stepwise problem-solving, transferring knowledge, comprehending intricate questions, and hallucinations, particularly on advanced items. 
Conclusions: While LLMs show potential for reinforcing factual knowledge and supporting learners, their limitations in handling higher-order cognitive tasks and image-based reasoning underscore the need for judicious integration with instructor-led guidance and targeted practice. This study provides valuable insights into the capabilities and limitations of current LLMs in preparing prospective dental students and highlights pathways for future innovations to improve performance across all cognitive skills assessed by the DAT.
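The quantitative analysis described in the Methods — per-section accuracy for each model on multiple-choice items — can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the data structures, model names, and sample items are hypothetical.

```python
# Illustrative sketch (not the study's actual harness): compute per-section
# accuracy for several models on multiple-choice DAT-style items.
from collections import defaultdict

def section_accuracy(responses, answer_key):
    """responses: {model: {question_id: chosen_option}}
    answer_key: {question_id: (section, correct_option)}
    Returns {model: {section: fraction_correct}}."""
    scores = {}
    for model, answers in responses.items():
        correct = defaultdict(int)
        total = defaultdict(int)
        for qid, (section, key) in answer_key.items():
            total[section] += 1
            if answers.get(qid) == key:
                correct[section] += 1
        scores[model] = {s: correct[s] / total[s] for s in total}
    return scores

# Hypothetical sample: NS = natural sciences, RC = reading comprehension,
# PA = perceptual ability (the section all models struggled with).
answer_key = {1: ("NS", "B"), 2: ("NS", "D"), 3: ("RC", "A"), 4: ("PA", "C")}
responses = {
    "model-a": {1: "B", 2: "D", 3: "A", 4: "E"},  # misses the PA item
    "model-b": {1: "B", 2: "C", 3: "A", 4: "C"},
}
print(section_accuracy(responses, answer_key))
```

Grouping accuracy by section rather than reporting a single overall score is what exposes the pattern the Results describe: near-perfect NS/RC performance alongside consistent PA failures.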
