LLM-based generation of USMLE-style questions with ASPET/AMSPC knowledge objectives: All RAGs and no riches.

IF 3.1 · CAS Tier 3 (Medicine) · JCR Q2 (Pharmacology & Pharmacy)
Thomas Thesen, Rupa Lalchandani Tuan, Joe Blumer, Michael W Lee
{"title":"基于法学硕士的usmle式问题生成与ASPET/AMSPC知识目标:所有的穷而没有富。","authors":"Thomas Thesen, Rupa Lalchandani Tuan, Joe Blumer, Michael W Lee","doi":"10.1002/bcp.70119","DOIUrl":null,"url":null,"abstract":"<p><p>Developing high-quality pharmacology multiple-choice questions (MCQs) is challenging in large part due to continually evolving therapeutic guidelines and the complex integration of basic science and clinical medicine in this subject area. Large language models (LLMs) like ChatGPT-4 have repeatedly demonstrated proficiency in answering medical licensing exam questions, prompting interest in their use for generating high stakes exam-style questions. This study evaluates the performance of ChatGPT-4o in generating USMLE-style pharmacology questions based on American Society for Pharmacology and Experimental Therapeutics/Association of Medical School Pharmacology Chairs (ASPET/AMSPC) knowledge objectives and assesses the impact of retrieval-augmented generation (RAG) on question accuracy and quality. Using standardized prompts, 50 questions (25 RAG and 25 non-RAG) were generated and subsequently evaluated by expert reviewers. Results showed higher accuracy for non-RAG questions (88.0% vs. 69.2%), though the difference was not statistically significant. No significant differences were observed in other quality dimensions. These findings suggest that sophisticated LLMs can generate high-quality pharmacology questions efficiently without RAG, though human oversight remains crucial.</p>","PeriodicalId":9251,"journal":{"name":"British journal of clinical pharmacology","volume":" ","pages":""},"PeriodicalIF":3.1000,"publicationDate":"2025-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"LLM-based generation of USMLE-style questions with ASPET/AMSPC knowledge objectives: All RAGs and no riches.\",\"authors\":\"Thomas Thesen, Rupa Lalchandani Tuan, Joe Blumer, Michael W Lee\",\"doi\":\"10.1002/bcp.70119\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Developing high-quality pharmacology multiple-choice questions (MCQs) is challenging in large part due to continually evolving therapeutic guidelines and the complex integration of basic science and clinical medicine in this subject area. Large language models (LLMs) like ChatGPT-4 have repeatedly demonstrated proficiency in answering medical licensing exam questions, prompting interest in their use for generating high stakes exam-style questions. This study evaluates the performance of ChatGPT-4o in generating USMLE-style pharmacology questions based on American Society for Pharmacology and Experimental Therapeutics/Association of Medical School Pharmacology Chairs (ASPET/AMSPC) knowledge objectives and assesses the impact of retrieval-augmented generation (RAG) on question accuracy and quality. Using standardized prompts, 50 questions (25 RAG and 25 non-RAG) were generated and subsequently evaluated by expert reviewers. Results showed higher accuracy for non-RAG questions (88.0% vs. 69.2%), though the difference was not statistically significant. No significant differences were observed in other quality dimensions. 
These findings suggest that sophisticated LLMs can generate high-quality pharmacology questions efficiently without RAG, though human oversight remains crucial.</p>\",\"PeriodicalId\":9251,\"journal\":{\"name\":\"British journal of clinical pharmacology\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":3.1000,\"publicationDate\":\"2025-06-08\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"British journal of clinical pharmacology\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1002/bcp.70119\",\"RegionNum\":3,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"PHARMACOLOGY & PHARMACY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"British journal of clinical pharmacology","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1002/bcp.70119","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"PHARMACOLOGY & PHARMACY","Score":null,"Total":0}
Citations: 0

Abstract


Developing high-quality pharmacology multiple-choice questions (MCQs) is challenging in large part due to continually evolving therapeutic guidelines and the complex integration of basic science and clinical medicine in this subject area. Large language models (LLMs) like ChatGPT-4 have repeatedly demonstrated proficiency in answering medical licensing exam questions, prompting interest in their use for generating high-stakes exam-style questions. This study evaluates the performance of ChatGPT-4o in generating USMLE-style pharmacology questions based on American Society for Pharmacology and Experimental Therapeutics/Association of Medical School Pharmacology Chairs (ASPET/AMSPC) knowledge objectives and assesses the impact of retrieval-augmented generation (RAG) on question accuracy and quality. Using standardized prompts, 50 questions (25 RAG and 25 non-RAG) were generated and subsequently evaluated by expert reviewers. Results showed higher accuracy for non-RAG questions (88.0% vs. 69.2%), though the difference was not statistically significant. No significant differences were observed in other quality dimensions. These findings suggest that sophisticated LLMs can generate high-quality pharmacology questions efficiently without RAG, though human oversight remains crucial.
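The abstract describes the two generation arms only at a high level. As a concrete illustration, the following is a minimal Python sketch of what the RAG and non-RAG conditions might look like in code, assuming the OpenAI Python client. The model identifier, the prompt wording, and the idea of prepending retrieved reference text are illustrative assumptions, not the authors' published protocol.

```python
# Minimal sketch of the two generation conditions compared in the study.
# Model name, prompt wording and the retrieval step are illustrative
# assumptions, not the authors' exact protocol.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = (
    "You are a medical educator. Write one USMLE-style, single-best-answer "
    "pharmacology MCQ (clinical vignette, five options, answer, explanation) "
    "targeting the given ASPET/AMSPC knowledge objective."
)

def generate_mcq(objective: str, context: str | None = None) -> str:
    """Generate one MCQ; passing `context` corresponds to the RAG arm."""
    user_prompt = f"Knowledge objective: {objective}"
    if context is not None:
        # RAG condition: retrieved reference material is prepended to the prompt.
        user_prompt = f"Use only this reference material:\n{context}\n\n{user_prompt}"
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_prompt},
        ],
    )
    return response.choices[0].message.content
```

That a gap as large as 88.0% vs. 69.2% fails to reach significance is plausible at these sample sizes. A quick sanity check with Fisher's exact test, using hypothetical counts chosen only to approximate the reported percentages (the paper reports percentages, not raw counts):

```python
# Illustrative check of why 88.0% vs. 69.2% accuracy can fail to reach
# significance with ~25 questions per arm. The counts below are assumed
# reconstructions, not the study's raw data.
from scipy.stats import fisher_exact

non_rag = [22, 3]  # 22/25 correct = 88.0% (assumed counts)
rag = [18, 8]      # 18/26 correct = 69.2% (assumed counts)
odds_ratio, p_value = fisher_exact([non_rag, rag])
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")  # p > 0.05 here
```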

Source journal metrics
CiteScore: 6.30
Self-citation rate: 8.80%
Annual articles: 419
Review time: 1 month
Journal overview: Published on behalf of the British Pharmacological Society, the British Journal of Clinical Pharmacology features papers and reports on all aspects of drug action in humans: review articles, mini review articles, original papers, commentaries, editorials and letters. The Journal enjoys a wide readership, bridging the gap between the medical profession, clinical research and the pharmaceutical industry. It also publishes research on new methods, new drugs and new approaches to treatment. The Journal is recognised as one of the leading publications in its field. It is online only, publishes open access research through its OnlineOpen programme and is published monthly.