Impact of Prompt Engineering on the Performance of ChatGPT Variants Across Different Question Types in Medical Student Examinations: Cross-Sectional Study.

IF 3.2 · Q1 (Education, Scientific Disciplines)
Ming-Yu Hsieh, Tzu-Ling Wang, Pen-Hua Su, Ming-Chih Chou
{"title":"Impact of Prompt Engineering on the Performance of ChatGPT Variants Across Different Question Types in Medical Student Examinations: Cross-Sectional Study.","authors":"Ming-Yu Hsieh, Tzu-Ling Wang, Pen-Hua Su, Ming-Chih Chou","doi":"10.2196/78320","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>Large language models such as ChatGPT (OpenAI) have shown promise in medical education assessments, but the comparative effects of prompt engineering across optimized variants and relative performance against medical students remain unclear.</p><p><strong>Objective: </strong>This study aims to systematically evaluate the impact of prompt engineering on five ChatGPT variants (GPT-3.5, GPT-4.0, GPT-4o, GPT-4o1-mini, and GPT-4o1) and benchmark their performance against fourth-year medical students in midterm and final examinations.</p><p><strong>Methods: </strong>A 100-item examination dataset covering multiple choice questions, short answer questions, clinical case analysis, and image-based questions was administered to each model under no-prompt and prompt-engineering conditions over 5 independent runs. Student cohort scores (N=143) were collected for comparison. Responses were scored using standardized rubrics, converted to percentages, and analyzed in SPSS Statistics (v29.0) with paired t tests and Cohen d (P<.05).</p><p><strong>Results: </strong>Baseline midterm scores ranged from 59.2% (GPT-3.5) to 94.1% (GPT-4o1), and final scores ranged from 55% to 92.4%. Fourth-year students averaged 89.4% (midterm) and 80.2% (final). Prompt engineering significantly improved GPT-3.5 (10.6%, P<.001) and GPT-4.0 (3.2%, P=.002) but yielded negligible gains for optimized variants (P=.07-.94). Optimized models matched or exceeded student performance on both exams.</p><p><strong>Conclusions: </strong>Prompt engineering enhances early-generation model performance, whereas advanced variants inherently achieve near-ceiling accuracy, surpassing medical students. As large language models mature, emphasis should shift from prompt design to model selection, multimodal integration, and critical use of artificial intelligence as a learning companion.</p>","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":"11 ","pages":"e78320"},"PeriodicalIF":3.2000,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12488032/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"JMIR Medical Education","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.2196/78320","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"EDUCATION, SCIENTIFIC DISCIPLINES","Score":null,"Total":0}

Abstract

Background: Large language models such as ChatGPT (OpenAI) have shown promise in medical education assessments, but the comparative effects of prompt engineering across optimized variants, and their performance relative to medical students, remain unclear.

Objective: This study aims to systematically evaluate the impact of prompt engineering on five ChatGPT variants (GPT-3.5, GPT-4.0, GPT-4o, GPT-4o1-mini, and GPT-4o1) and benchmark their performance against fourth-year medical students in midterm and final examinations.
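
As a concrete illustration of how such a comparison can be run programmatically, a minimal sketch of posing a single exam item to one of these variants under the study's two conditions (no prompt vs prompt engineering) is shown below. It assumes the current OpenAI Python client; the model identifier, the system-prompt wording, and the `ask_exam_item` helper are illustrative assumptions, not the authors' actual protocol.

```python
# Illustrative sketch only: the model name, system prompt, and helper function
# are assumptions, not the study's published querying protocol.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical prompt-engineering instruction; the paper does not reproduce its exact wording here.
ENGINEERED_SYSTEM_PROMPT = (
    "You are sitting a fourth-year medical school written examination. "
    "Reason step by step, then state a single final answer."
)

def ask_exam_item(question: str, use_prompt_engineering: bool, model: str = "gpt-4o") -> str:
    """Send one exam item to the model, with or without an engineered system prompt."""
    messages = []
    if use_prompt_engineering:
        messages.append({"role": "system", "content": ENGINEERED_SYSTEM_PROMPT})
    messages.append({"role": "user", "content": question})
    # The study repeated this over 5 independent runs per condition; one call is shown here.
    response = client.chat.completions.create(model=model, messages=messages)
    return response.choices[0].message.content

# Example: the same item under both conditions
item = "A 45-year-old man presents with crushing chest pain radiating to the left arm. Most likely diagnosis?"
baseline_answer = ask_exam_item(item, use_prompt_engineering=False)
prompted_answer = ask_exam_item(item, use_prompt_engineering=True)
```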

Methods: A 100-item examination dataset covering multiple choice questions, short answer questions, clinical case analysis, and image-based questions was administered to each model under no-prompt and prompt-engineering conditions over 5 independent runs. Student cohort scores (N=143) were collected for comparison. Responses were scored using standardized rubrics, converted to percentages, and analyzed in SPSS Statistics (v29.0) with paired t tests and Cohen d (P<.05).
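
As a rough illustration of the reported analysis (the study used SPSS Statistics v29.0; the Python/SciPy equivalent and the run-level scores below are hypothetical), a paired t test over the matched runs and a paired-samples Cohen d (mean of the differences divided by their SD) could be computed as follows.

```python
# Hypothetical percentage scores for one model over 5 independent runs per condition;
# the study's actual analysis was performed in SPSS Statistics v29.0.
import numpy as np
from scipy import stats

no_prompt = np.array([59.0, 60.5, 58.2, 59.8, 58.5])   # baseline (no-prompt) condition
engineered = np.array([69.5, 70.2, 68.8, 71.0, 69.9])  # prompt-engineering condition

# Paired t test across the matched runs
t_stat, p_value = stats.ttest_rel(engineered, no_prompt)

# Cohen's d for paired samples: mean difference divided by the SD of the differences
diff = engineered - no_prompt
cohen_d = diff.mean() / diff.std(ddof=1)

print(f"mean improvement = {diff.mean():.1f} points, t = {t_stat:.2f}, "
      f"P = {p_value:.4f}, d = {cohen_d:.2f}")
```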

Results: Baseline midterm scores ranged from 59.2% (GPT-3.5) to 94.1% (GPT-4o1), and final scores ranged from 55% to 92.4%. Fourth-year students averaged 89.4% (midterm) and 80.2% (final). Prompt engineering significantly improved GPT-3.5 (10.6%, P<.001) and GPT-4.0 (3.2%, P=.002) but yielded negligible gains for optimized variants (P=.07-.94). Optimized models matched or exceeded student performance on both exams.

Conclusions: Prompt engineering enhances early-generation model performance, whereas advanced variants inherently achieve near-ceiling accuracy, surpassing medical students. As large language models mature, emphasis should shift from prompt design to model selection, multimodal integration, and critical use of artificial intelligence as a learning companion.
