Jiaxi Pu, Jie Hong, Qiao Yu, Pan Yu, Jiaqi Tian, Yuehua He, Hanwei Huang, Qiongjing Yuan, Lijian Tao, Zhangzhe Peng
Title: Accuracy, satisfaction, and impact of custom GPT in acquiring clinical knowledge: Potential for AI-assisted medical education
DOI: 10.1080/0142159X.2025.2458808
Journal: Medical Teacher (Q1, Education, Scientific Disciplines; Impact Factor 3.3)
Publication date: 2025-02-02
Pages: 1-7
Citations: 0
Abstract
Background: Recent advancements in artificial intelligence (AI) have enabled the customization of large language models to address specific domains such as medical education. This study investigates the practical performance of a custom GPT model in enhancing clinical knowledge acquisition for medical students and physicians.
Methods: A custom GPT was developed by incorporating the latest readily available teaching resources. Its accuracy in providing clinical knowledge was evaluated using a set of clinical questions, and responses were compared against established medical guidelines. Satisfaction was assessed through surveys involving medical students and physicians at different stages and from various types of hospitals. The impact of the custom GPT was further evaluated by comparing its role in facilitating clinical knowledge acquisition with traditional learning methods.
Results: The custom GPT demonstrated higher accuracy (83.6%) than general AI models (65.5% and 69.1%) and was comparable to a professionally developed AI (Glass Health, 83.6%). Residents reported higher satisfaction than clerks and physicians, citing improved learning independence, motivation, and confidence (p < 0.05). Physicians, especially those from teaching hospitals, showed greater eagerness than clerks and residents to develop a custom GPT (p < 0.05). The impact analysis revealed that residents using the custom GPT achieved better test scores than those using traditional resources (p < 0.05), though fewer perfect scores were obtained.
Conclusions: The custom GPT demonstrates significant promise as an innovative tool for advancing medical education, particularly for residents. Its capability to deliver accurate, tailored information complements traditional teaching methods, aiding educators in promoting personalized and consistent training. However, it is essential for both learners and educators to remain critical in evaluating AI-generated information. With continued development and thoughtful integration, AI tools like custom GPTs have the potential to significantly enhance the quality and accessibility of medical education.
Journal description:
Medical Teacher provides accounts of new teaching methods and guidance on structuring courses and assessing achievement, and serves as a forum for communication between medical teachers and those involved in general education. In particular, the journal recognizes the problems teachers have in keeping up to date with developments in educational methods that lead to more effective teaching and learning, at a time when the content of the curriculum—from medical procedures to policy changes in health care provision—is also changing. The journal features reports of innovation and research in medical education, case studies, survey articles, practical guidelines, reviews of current literature, and book reviews. All articles are peer reviewed.