Evaluating and leveraging large language models in clinical pharmacology and therapeutics assessment: From exam takers to exam shapers

Alexandre O Gérard, Diane Merino, Marc Labriffe, Fanny Rocher, Delphine Viard, Laurence Zemori, Thibaud Lavrut, Erik M Donker, Joost D Piët, Jean-Paul Fournier, Milou-Daniel Drici, Alexandre Destere

British Journal of Clinical Pharmacology, 10 June 2025. DOI: 10.1002/bcp.70137
Abstract
Aims: In medical education, the ability of large language models (LLMs) to match human performance raises questions about their potential as educational tools. This study evaluates LLMs' performance on Clinical Pharmacology and Therapeutics (CPT) exams, comparing their results with those of medical students and exploring their ability to identify poorly formulated multiple-choice questions (MCQs).
Methods: ChatGPT-4 Omni, Gemini Advanced, Le Chat and DeepSeek R1 were tested on local CPT exams (third year of bachelor's degree [L3] and first and second years of master's degree [M1, M2]) and the European Prescribing Exam (EuroPE+). The exams included MCQs and open-ended questions assessing knowledge and prescribing skills. LLM answers were scored with the same system applied to students. A confusion matrix was used to evaluate the ability of ChatGPT and Gemini to identify ambiguous or erroneous MCQs.
Results: LLMs achieved results comparable or superior to those of medical students at all levels. On local exams, LLMs outperformed M1 students and matched L3 and M2 students. In EuroPE+, LLMs significantly outperformed students in both the knowledge and the prescribing skills sections. All LLM errors in EuroPE+ were genuine (100%), whereas 24.3% of the errors on local exams were attributable to ambiguities or correction flaws. When both ChatGPT and Gemini gave the same incorrect answer to an MCQ, the specificity for detecting ambiguous questions was 92.9%, with a negative predictive value of 85.5%.
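For readers less familiar with these metrics, the sketch below (a minimal illustration, not the authors' analysis code) shows how specificity and negative predictive value fall out of a 2×2 confusion matrix when "both LLMs gave the same wrong answer" is treated as a flag for an ambiguous or erroneous MCQ. The function name and the counts are hypothetical; the counts were chosen only so the printed values land near the percentages reported above.

```python
# Minimal sketch: specificity and negative predictive value (NPV) from a
# 2x2 confusion matrix. Test = "ChatGPT and Gemini gave the same incorrect
# answer"; condition = "the MCQ is ambiguous/erroneous".
# All counts below are hypothetical placeholders, NOT the study's data.

def specificity_and_npv(tp: int, fp: int, tn: int, fn: int) -> tuple[float, float]:
    """Return (specificity, NPV) for a binary flag.

    specificity = TN / (TN + FP): of the well-formed MCQs, how many are not flagged.
    NPV         = TN / (TN + FN): of the unflagged MCQs, how many are truly well formed.
    TP is accepted only to keep the full matrix in view; it is not needed here.
    """
    specificity = tn / (tn + fp)
    npv = tn / (tn + fn)
    return specificity, npv


if __name__ == "__main__":
    # Hypothetical counts chosen to illustrate the reported figures.
    tp, fp, tn, fn = 8, 5, 65, 11
    spec, npv = specificity_and_npv(tp, fp, tn, fn)
    print(f"Specificity: {spec:.1%}")  # ~92.9%, as reported in the abstract
    print(f"NPV:         {npv:.1%}")   # ~85.5%, as reported in the abstract
```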
Conclusion: LLMs demonstrate capabilities comparable to or exceeding those of medical students on CPT exams. Their ability to flag potentially flawed MCQs highlights their value not only as educational tools but also as quality-control instruments in exam preparation.
Journal information
Published on behalf of the British Pharmacological Society, the British Journal of Clinical Pharmacology features papers and reports on all aspects of drug action in humans: review articles, mini review articles, original papers, commentaries, editorials and letters. The Journal enjoys a wide readership, bridging the gap between the medical profession, clinical research and the pharmaceutical industry. It also publishes research on new methods, new drugs and new approaches to treatment. The Journal is recognised as one of the leading publications in its field. It is online only, publishes open access research through its OnlineOpen programme and is published monthly.