Accuracy, satisfaction, and impact of custom GPT in acquiring clinical knowledge: Potential for AI-assisted medical education.

IF 3.3 · CAS Tier 2 (Education) · JCR Q1, EDUCATION, SCIENTIFIC DISCIPLINES
Medical Teacher · Pub Date: 2025-09-01 · Epub Date: 2025-02-02 · DOI: 10.1080/0142159X.2025.2458808
Jiaxi Pu, Jie Hong, Qiao Yu, Pan Yu, Jiaqi Tian, Yuehua He, Hanwei Huang, Qiongjing Yuan, Lijian Tao, Zhangzhe Peng
{"title":"Accuracy, satisfaction, and impact of custom GPT in acquiring clinical knowledge: Potential for AI-assisted medical education.","authors":"Jiaxi Pu, Jie Hong, Qiao Yu, Pan Yu, Jiaqi Tian, Yuehua He, Hanwei Huang, Qiongjing Yuan, Lijian Tao, Zhangzhe Peng","doi":"10.1080/0142159X.2025.2458808","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>Recent advancements in artificial intelligence (AI) have enabled the customization of large language models to address specific domains such as medical education. This study investigates the practical performance of a custom GPT model in enhancing clinical knowledge acquisition for medical students and physicians.</p><p><strong>Methods: </strong>A custom GPT was developed by incorporating the latest readily available teaching resources. Its accuracy in providing clinical knowledge was evaluated using a set of clinical questions, and responses were compared against established medical guidelines. Satisfaction was assessed through surveys involving medical students and physicians at different stages and from various types of hospitals. The impact of the custom GPT was further evaluated by comparing its role in facilitating clinical knowledge acquisition with traditional learning methods.</p><p><strong>Results: </strong>The custom GPT demonstrated higher accuracy (83.6%) compared to general AI models (65.5%, 69.1%) and was comparable to a professionally developed AI (Glass Health, 83.6%). Residents reported the highest satisfaction compared to clerks and physicians, citing improved learning independence, motivation, and confidence (<i>p</i> < 0.05). Physicians, especially those from teaching hospitals, showed greater eagerness to develop a custom GPT compared to clerks and residents (<i>p</i> < 0.05). The impact analysis revealed that residents using the custom GPT achieved better test scores compared to those using traditional resources (<i>p</i> < 0.05), though fewer perfect scores were obtained.</p><p><strong>Conclusions: </strong>The custom GPT demonstrates significant promise as an innovative tool for advancing medical education, particularly for residents. Its capability to deliver accurate, tailored information complements traditional teaching methods, aiding educators in promoting personalized and consistent training. However, it is essential for both learners and educators to remain critical in evaluating AI-generated information. With continued development and thoughtful integration, AI tools like custom GPTs have the potential to significantly enhance the quality and accessibility of medical education.</p>","PeriodicalId":18643,"journal":{"name":"Medical Teacher","volume":" ","pages":"1502-1508"},"PeriodicalIF":3.3000,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Medical Teacher","FirstCategoryId":"95","ListUrlMain":"https://doi.org/10.1080/0142159X.2025.2458808","RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/2/2 0:00:00","PubModel":"Epub","JCR":"Q1","JCRName":"EDUCATION, SCIENTIFIC DISCIPLINES","Score":null,"Total":0}
Citations: 0

Abstract

Background: Recent advancements in artificial intelligence (AI) have enabled the customization of large language models to address specific domains such as medical education. This study investigates the practical performance of a custom GPT model in enhancing clinical knowledge acquisition for medical students and physicians.

Methods: A custom GPT was developed by incorporating the latest readily available teaching resources. Its accuracy in providing clinical knowledge was evaluated using a set of clinical questions, and responses were compared against established medical guidelines. Satisfaction was assessed through surveys involving medical students and physicians at different stages and from various types of hospitals. The impact of the custom GPT was further evaluated by comparing its role in facilitating clinical knowledge acquisition with traditional learning methods.
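The abstract does not describe the scoring pipeline in detail. As a point of reference only, guideline-based accuracy evaluation of this kind usually amounts to grading each model response against an answer key derived from the guidelines; the sketch below illustrates that idea with hypothetical questions, a toy keyword-matching rubric, and placeholder data that are not drawn from the study.

```python
# Minimal sketch of guideline-based accuracy scoring. The questions, key
# phrases, and keyword rubric are hypothetical illustrations; the study's
# actual question set, grading criteria, and model calls are not specified
# in the abstract.

def grade(response: str, guideline_key: str) -> bool:
    """Toy rubric: count a response as correct if it contains the
    guideline-derived key phrase (a real study would use expert review)."""
    return guideline_key.lower() in response.lower()

def accuracy(responses: dict[str, str], answer_key: dict[str, str]) -> float:
    """Fraction of questions whose response agrees with the guideline key."""
    graded = [grade(responses[q], key) for q, key in answer_key.items()]
    return sum(graded) / len(graded)

# Hypothetical example: two clinical questions with guideline-based keys.
answer_key = {
    "First-line drug class for uncomplicated hypertension?": "thiazide",
    "Initial test for low-risk suspected pulmonary embolism?": "d-dimer",
}
responses = {
    "First-line drug class for uncomplicated hypertension?":
        "A thiazide diuretic is a reasonable first-line choice.",
    "Initial test for low-risk suspected pulmonary embolism?":
        "Order a D-dimer before proceeding to imaging.",
}
print(f"Accuracy: {accuracy(responses, answer_key):.1%}")  # -> 100.0%
```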

Results: The custom GPT achieved higher accuracy (83.6%) than general-purpose AI models (65.5% and 69.1%) and was comparable to a professionally developed AI (Glass Health, 83.6%). Residents reported higher satisfaction than clerks and physicians, citing improved learning independence, motivation, and confidence (p < 0.05). Physicians, especially those from teaching hospitals, were more eager than clerks and residents to develop a custom GPT (p < 0.05). The impact analysis revealed that residents using the custom GPT achieved better test scores than those using traditional resources (p < 0.05), although fewer perfect scores were obtained.
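The abstract reports percentages and p-values but not the statistical tests used or the size of the question set. Purely as an illustration of how two accuracy proportions could be compared, the sketch below runs a pooled two-proportion z-test on a hypothetical 55-item question bank (chosen only because 46/55 ≈ 83.6% and 36/55 ≈ 65.5% reproduce the reported figures); it is not the authors' analysis.

```python
# Illustrative two-proportion z-test (pooled, two-sided). The item count and
# correct-answer counts are assumptions made for the example, not data taken
# from the study.
from math import erf, sqrt

def two_proportion_z(hits1: int, n1: int, hits2: int, n2: int) -> tuple[float, float]:
    """Return (z statistic, two-sided p-value) for H0: p1 == p2."""
    p1, p2 = hits1 / n1, hits2 / n2
    pooled = (hits1 + hits2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided tail of the standard normal via the error function.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical counts: 46/55 correct (custom GPT) vs. 36/55 (general model).
z, p = two_proportion_z(46, 55, 36, 55)
print(f"z = {z:.2f}, p = {p:.3f}")  # roughly z ≈ 2.19, p ≈ 0.03
```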

Conclusions: The custom GPT demonstrates significant promise as an innovative tool for advancing medical education, particularly for residents. Its capability to deliver accurate, tailored information complements traditional teaching methods, aiding educators in promoting personalized and consistent training. However, it is essential for both learners and educators to remain critical in evaluating AI-generated information. With continued development and thoughtful integration, AI tools like custom GPTs have the potential to significantly enhance the quality and accessibility of medical education.

Source journal: Medical Teacher (Medicine - Health Care Sciences & Services)
CiteScore: 7.80 · Self-citation rate: 8.50% · Annual articles: 396 · Review time: 3-6 weeks
Journal description: Medical Teacher provides accounts of new teaching methods, guidance on structuring courses and assessing achievement, and serves as a forum for communication between medical teachers and those involved in general education. In particular, the journal recognizes the problems teachers have in keeping up-to-date with the developments in educational methods that lead to more effective teaching and learning at a time when the content of the curriculum, from medical procedures to policy changes in health care provision, is also changing. The journal features reports of innovation and research in medical education, case studies, survey articles, practical guidelines, reviews of current literature and book reviews. All articles are peer reviewed.