Impact of Prompt Engineering on the Performance of ChatGPT Variants Across Different Question Types in Medical Student Examinations.

Impact Factor 3.2 · Q1 (Education, Scientific Disciplines)
Ming Yu Hsieh, Tzu-Ling Wang, Pen-Hua Su, Ming-Chih Chou
{"title":"Impact of Prompt Engineering on the Performance of ChatGPT Variants Across Different Question Types in Medical Student Examinations.","authors":"Ming Yu Hsieh, Tzu-Ling Wang, Pen-Hua Su, Ming-Chih Chou","doi":"10.2196/78320","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>Large language models (LLMs) such as ChatGPT have shown promise in medical education assessments, but the comparative effects of prompt engineering across optimized variants and relative performance against medical students remain unclear.</p><p><strong>Objective: </strong>To systematically evaluate the impact of prompt engineering on five ChatGPT variants (GPT-3.5, GPT-4.0, GPT-4o, GPT-4o1mini, GPT-4o1) and benchmark their performance against fourth-year medical students in midterm and final examinations.</p><p><strong>Methods: </strong>A 100-item examination dataset covering multiple-choice, short-answer, clinical case analysis, and image-based questions was administered to each model under no-prompt and prompt-engineered conditions over five independent runs. Student cohort scores (n=143) were collected for comparison. Responses were scored using standardized rubrics, converted to percentages, and analyzed in SPSS Statistics 29 with paired t-tests and Cohen's d (p<0.05).</p><p><strong>Results: </strong>Baseline midterm scores ranged from 59.2% (GPT-3.5) to 94.1% (GPT-4o1); final scores from 55.0% to 92.4%. Fourth-year students averaged 89.4% (midterm) and 80.2% (final). Prompt engineering significantly improved GPT-3.5 (+10.6%, p<0.001) and GPT-4.0 (+3.2%, p=0.002) but yielded negligible gains for optimized variants (p=0.066-0.94). Optimized models matched or exceeded student performance on both exams.</p><p><strong>Conclusions: </strong>Prompt engineering enhances early-generation model performance, whereas advanced variants inherently achieve near-ceiling accuracy, surpassing medical students. As LLMs mature, emphasis should shift from prompt design to model selection, multimodal integration, and critical use of AI as a learning companion.</p><p><strong>Clinicaltrial: </strong></p>","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":" ","pages":""},"PeriodicalIF":3.2000,"publicationDate":"2025-08-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"JMIR Medical Education","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.2196/78320","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"EDUCATION, SCIENTIFIC DISCIPLINES","Score":null,"Total":0}
Citations: 0

Abstract

Background: Large language models (LLMs) such as ChatGPT have shown promise in medical education assessments, but the comparative effects of prompt engineering across optimized variants and relative performance against medical students remain unclear.

Objective: To systematically evaluate the impact of prompt engineering on five ChatGPT variants (GPT-3.5, GPT-4.0, GPT-4o, GPT-4o1mini, GPT-4o1) and benchmark their performance against fourth-year medical students in midterm and final examinations.

Methods: A 100-item examination dataset covering multiple-choice, short-answer, clinical case analysis, and image-based questions was administered to each model under no-prompt and prompt-engineered conditions over five independent runs. Student cohort scores (n=143) were collected for comparison. Responses were scored using standardized rubrics, converted to percentages, and analyzed in SPSS Statistics 29 with paired t-tests and Cohen's d (p<0.05).
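The statistical comparison described above can be illustrated with a minimal sketch (not the authors' SPSS workflow). The score arrays below are hypothetical stand-ins for one model's five per-run percentage scores under the no-prompt and prompt-engineered conditions; Cohen's d for paired samples is computed as the mean difference divided by the standard deviation of the differences, one common convention.

```python
# Minimal sketch of a paired t-test plus Cohen's d for paired samples.
# The data values are hypothetical and only illustrate the analysis structure.
import numpy as np
from scipy import stats

baseline = np.array([59.2, 60.1, 58.7, 59.8, 60.4])  # no-prompt runs, %
prompted = np.array([69.5, 70.8, 69.1, 70.2, 71.0])  # prompt-engineered runs, %

# Two-sided paired t-test, analogous to SPSS's paired-samples procedure.
t_stat, p_value = stats.ttest_rel(prompted, baseline)

# Cohen's d for paired samples: mean of the differences / SD of the differences.
diff = prompted - baseline
cohens_d = diff.mean() / diff.std(ddof=1)

print(f"mean gain = {diff.mean():.1f} points, "
      f"t = {t_stat:.2f}, p = {p_value:.4f}, d = {cohens_d:.2f}")
```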

Results: Baseline midterm scores ranged from 59.2% (GPT-3.5) to 94.1% (GPT-4o1); final scores from 55.0% to 92.4%. Fourth-year students averaged 89.4% (midterm) and 80.2% (final). Prompt engineering significantly improved GPT-3.5 (+10.6%, p<0.001) and GPT-4.0 (+3.2%, p=0.002) but yielded negligible gains for optimized variants (p=0.066-0.94). Optimized models matched or exceeded student performance on both exams.

Conclusions: Prompt engineering enhances early-generation model performance, whereas advanced variants inherently achieve near-ceiling accuracy, surpassing medical students. As LLMs mature, emphasis should shift from prompt design to model selection, multimodal integration, and critical use of AI as a learning companion.



Source journal: JMIR Medical Education (Social Sciences-Education)
CiteScore: 6.90
Self-citation rate: 5.60%
Articles published: 54
Review time: 8 weeks