Title: Resilience Training Web App for National Health Service Keyworkers: Pilot Usability Study
Authors: Joanna Burrell, Felicity Baker, Matthew Russell Bennion
Journal: JMIR Medical Education 2025;11:e51101 (published 2025-01-06). DOI: 10.2196/51101
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11728195/pdf/

Background: It is well established that frontline health care staff are particularly at risk of stress. Resilience is important to help staff to manage daily challenges and to protect against burnout.

Objective: This study aimed to assess the usability and user perceptions of a resilience training web app developed to support health care keyworkers in understanding their own stress response and to help them put into place strategies to manage stress and to build resilience.

Methods: Nurses (n=7) and other keyworkers (n=1), the target users for the resilience training web app, participated in the usability evaluation. Participants completed a pretraining questionnaire capturing basic demographic information and then used the training before completing a posttraining feedback questionnaire exploring the impact and usability of the web app.

Results: From a sample of 8 keyworkers, 6 (75%) rated their current role as "sometimes" stressful. All 8 (100%) keyworkers found the training easy to understand, and 5 of 7 (71%) agreed that the training increased their understanding of both stress and resilience. Further, 6 of 8 (75%) agreed that the resilience model had helped them to understand what resilience is. Many of the keyworkers (6/8, 75%) agreed that the content was relevant to them. Furthermore, 6 of 8 (75%) agreed that they were likely to act to develop their resilience following completion of the training.

Conclusions: This study tested the usability of a web app for resilience training specifically targeting National Health Service keyworkers. This work preceded a larger scale usability study, and it is hoped this study will help guide other studies to develop similar programs in clinical settings.
{"title":"Leveraging Generative AI To Improve Motivation and Retrieval in Higher Education Learners.","authors":"Noahlana Monzon, Franklin Alan Hays","doi":"10.2196/59210","DOIUrl":"https://doi.org/10.2196/59210","url":null,"abstract":"<p><strong>Unstructured: </strong>Generative artificial intelligence (GAI) presents novel approaches to enhance motivation, curriculum structure and development, and learning and retrieval processes for both learners and instructors. Though a focus for this emerging technology is academic misconduct, we sought to leverage GAI in curriculum structure to facilitate educational outcomes. For instructors, GAI offers new opportunities in course design and management while reducing time requirements to evaluate outcomes and personalizing learner feedback. These include innovative instructional designs such as flipped classrooms and gamification, enriching teaching methodologies with focused and interactive approaches, and team-based exercise development, among others. For learners, GAI offers unprecedented self-directed learning opportunities, improved cognitive engagement, and effective retrieval practices, leading to enhanced autonomy, motivation, and knowledge retention. Though empowering, this evolving landscape has integration challenges and ethical considerations, including accuracy, technological evolution, loss of learner's voice, and socio-economic disparities. Our experience demonstrates that the responsible application of GAI's in educational settings will revolutionize learning practices, making education more accessible and tailored - producing positive motivational outcomes for both learners and instructors. Thus, we argue that leveraging GAI in educational settings will improve outcomes with implications extending from primary through higher and continuing education paradigms.</p>","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":" ","pages":""},"PeriodicalIF":3.2,"publicationDate":"2025-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143256918","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Performance of ChatGPT-4o on the Japanese Medical Licensing Examination: Evalution of Accuracy in Text-Only and Image-Based Questions.","authors":"Yuki Miyazaki, Masahiro Hata, Hisaki Omori, Atsuya Hirashima, Yuta Nakagawa, Mitsuhiro Eto, Shun Takahashi, Manabu Ikeda","doi":"10.2196/63129","DOIUrl":"10.2196/63129","url":null,"abstract":"<p><strong>Unlabelled: </strong>This study evaluated the performance of ChatGPT with GPT-4 Omni (GPT-4o) on the 118th Japanese Medical Licensing Examination. The study focused on both text-only and image-based questions. The model demonstrated a high level of accuracy overall, with no significant difference in performance between text-only and image-based questions. Common errors included clinical judgment mistakes and prioritization issues, underscoring the need for further improvement in the integration of artificial intelligence into medical education and practice.</p>","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":"10 ","pages":"e63129"},"PeriodicalIF":3.2,"publicationDate":"2024-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11687171/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142883112","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Acceptance of Virtual Reality in Trainees Using a Technology Acceptance Model: Survey Study
Authors: Ellen Y Wang, Daniel Qian, Lijin Zhang, Brian S-K Li, Brian Ko, Michael Khoury, Meghana Renavikar, Avani Ganesan, Thomas J Caruso
Journal: JMIR Medical Education 2024;10:e60767 (published 2024-12-23). DOI: 10.2196/60767
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11693781/pdf/

Background: Virtual reality (VR) technologies have demonstrated therapeutic usefulness across a variety of health care settings. However, graduate medical education (GME) trainee perspectives on VR acceptability and usability are limited. The behavioral intentions of GME trainees with regard to VR as an anxiolytic tool have not been characterized through a theoretical framework of technology adoption.

Objective: The primary aim of this study was to apply a hybrid Technology Acceptance Model (TAM) and Unified Theory of Acceptance and Use of Technology (UTAUT) model to evaluate factors that predict the behavioral intentions of GME trainees to use VR for patient anxiolysis. The secondary aim was to assess the reliability of the TAM-UTAUT.

Methods: Participants were surveyed in June 2023. GME trainees participated in a VR experience used to reduce perioperative anxiety. Participants then completed a survey evaluating demographics, perceptions, attitudes, environmental factors, and behavioral intentions that influence the adoption of new technologies.

Results: In total, 202 of 1540 GME trainees participated; 198 were included in the final analysis (12.9% participation rate). Perceptions of usefulness, ease of use, and enjoyment; social influence; and facilitating conditions predicted intention to use VR. Age, past use, price willing to pay, and curiosity were weaker predictors of intention to use. All confirmatory factor analysis models demonstrated a good fit. All domain measurements demonstrated acceptable reliability.

Conclusions: This TAM-UTAUT demonstrated validity and reliability for predicting the behavioral intentions of GME trainees to use VR as a therapeutic anxiolytic in clinical practice. Social influence and facilitating conditions are modifiable factors that present opportunities to advance VR adoption, such as fostering exposure to new technologies and offering relevant training and social encouragement. Future investigations should study the model's reliability within specialties in different geographic locations.
Title: Influence of Training With Corrective Feedback Devices on Cardiopulmonary Resuscitation Skills Acquisition and Retention: Systematic Review and Meta-Analysis
Authors: Abel Nicolau, Inês Jorge, Pedro Vieira-Marques, Carla Sa-Couto
Journal: JMIR Medical Education 2024;10:e59720 (published 2024-12-19). DOI: 10.2196/59720
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11695954/pdf/

Background: Several studies related to the use of corrective feedback devices in cardiopulmonary resuscitation training, with different populations, training methodologies, and equipment, present distinct results regarding the influence of this technology.

Objective: This systematic review and meta-analysis aimed to examine the impact of corrective feedback devices on cardiopulmonary resuscitation skills acquisition and retention for laypeople and health care professionals. Training duration was also studied.

Methods: The search was conducted in PubMed, Web of Science, and Scopus from January 2015 to December 2023. Eligible randomized controlled trials compared technology-based training incorporating corrective feedback with standard training. Outcomes of interest were the quality of chest compression-related components. The risk of bias was assessed using the Cochrane tool. A meta-analysis was used to explore the heterogeneity of the selected studies.

Results: In total, 20 studies were included. Overall, it was reported that corrective feedback devices used during training had a positive impact on both skills acquisition and retention. Medium to high heterogeneity was observed.

Conclusions: This systematic review and meta-analysis suggest that corrective feedback devices enhance skills acquisition and retention over time. Considering the medium to high heterogeneity observed, these findings should be interpreted with caution. More standardized, high-quality studies are needed.

Trial registration: PROSPERO CRD42021240953; https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=240953
Title: Long-Term Knowledge Retention of Biochemistry Among Medical Students in Riyadh, Saudi Arabia: Cross-Sectional Survey
Authors: Nimer Mehyar, Mohammed Awawdeh, Aamir Omair, Adi Aldawsari, Abdullah Alshudukhi, Ahmed Alzeer, Khaled Almutairi, Sultan Alsultan
Journal: JMIR Medical Education 2024;10:e56132 (published 2024-12-16). DOI: 10.2196/56132
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11665479/pdf/

Background: Biochemistry is a cornerstone of medical education. Its knowledge is integral to the understanding of complex biological processes and how they are applied in several areas in health care. Its significance is also reflected in the way it informs the practice of medicine, guiding and helping in both diagnosis and treatment. However, the retention of biochemistry knowledge over time remains a dilemma. Long-term retention of such crucial information is extremely important, as it forms the foundation upon which clinical skills are developed and refined. The effectiveness of biochemistry education, and consequently its long-term retention, is influenced by several factors. Educational methods play a critical role; interactive and integrative teaching approaches have been suggested to enhance retention compared with traditional didactic methods. The frequency and context in which biochemistry knowledge is applied in clinical settings can also significantly affect its retention. Practical application reinforces theoretical understanding, making the knowledge more accessible in the long term. Prior knowledge of (familiarity with) information suggests that it is stored in long-term memory, which makes it easier to recall over the long term.

Objectives: This investigation was conducted at King Saud bin Abdulaziz University for Health Sciences in Riyadh, Saudi Arabia. The aim of the study is to understand the dynamics of long-term retention of biochemistry among medical students. Specifically, it looks for the association between students' familiarity with biochemistry content and actual knowledge retention levels.

Methods: A cross-sectional correlational survey involving 240 students from King Saud bin Abdulaziz University for Health Sciences was conducted. Participants were recruited via nonprobability convenience sampling. A validated biochemistry assessment tool with 20 questions was used to gauge students' retention in biomolecules, catalysis, bioenergetics, and metabolism. To assess students' familiarity with the knowledge content of the test questions, each question was accompanied by options indicating students' prior knowledge of its content. Statistical tests such as the Mann-Whitney U test, Kruskal-Wallis test, and chi-square test were used.

Results: Our findings revealed a significant correlation between students' familiarity with the content and their knowledge retention in the biomolecules (r=0.491; P<.001), catalysis (r=0.500; P<.001), bioenergetics (r=0.528; P<.001), and metabolism (r=0.564; P<.001) biochemistry knowledge domains.

Conclusions: This study highlights the significance of familiarity (prior knowledge) in evaluating the retention of biochemistry knowledge. Although limited in terms of generalizability and inherent biases, the research highlights the crucial significance of students' ...
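The correlation and group-comparison analyses named in the Methods map directly onto SciPy's nonparametric routines. A minimal sketch follows; the CSV file, column names, and grouping variable are hypothetical, and because the abstract does not state which correlation coefficient underlies the reported r values, Spearman's rank correlation is used here as a stand-in suited to ordinal familiarity ratings.

```python
# Sketch: nonparametric tests of the kind listed in the Methods.
# The input file, columns, and groups are hypothetical placeholders.
import pandas as pd
from scipy import stats

df = pd.read_csv("biochem_retention.csv")  # hypothetical survey export

# Association between familiarity and retention score in one knowledge domain.
rho, p = stats.spearmanr(df["familiarity_biomolecules"], df["score_biomolecules"])
print(f"biomolecules: rho={rho:.3f}, P={p:.3g}")

# Mann-Whitney U test: retention scores between two study years (hypothetical grouping).
year2 = df.loc[df["year"] == 2, "total_score"]
year3 = df.loc[df["year"] == 3, "total_score"]
u, p = stats.mannwhitneyu(year2, year3, alternative="two-sided")
print(f"Mann-Whitney U={u:.1f}, P={p:.3g}")

# Kruskal-Wallis test across more than two groups.
groups = [g["total_score"].to_numpy() for _, g in df.groupby("year")]
h, p = stats.kruskal(*groups)
print(f"Kruskal-Wallis H={h:.2f}, P={p:.3g}")

# Chi-square test of independence between familiarity category and pass/fail.
table = pd.crosstab(df["familiarity_category"], df["passed"])
chi2, p, dof, _ = stats.chi2_contingency(table)
print(f"chi-square={chi2:.2f}, dof={dof}, P={p:.3g}")
```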
{"title":"Topics and Trends of Health Informatics Education Research: Scientometric Analysis.","authors":"Qing Han","doi":"10.2196/58165","DOIUrl":"10.2196/58165","url":null,"abstract":"<p><strong>Background: </strong>Academic and educational institutions are making significant contributions toward training health informatics professionals. As research in health informatics education (HIE) continues to grow, it is useful to have a clearer understanding of this research field.</p><p><strong>Objective: </strong>This study aims to comprehensively explore the research topics and trends of HIE from 2014 to 2023. Specifically, it aims to explore (1) the trends of annual articles, (2) the prolific countries/regions, institutions, and publication sources, (3) the scientific collaborations of countries/regions and institutions, and (4) the major research themes and their developmental tendencies.</p><p><strong>Methods: </strong>Using publications in Web of Science Core Collection, a scientometric analysis of 575 articles related to the field of HIE was conducted. The structural topic model was used to identify topics discussed in the literature and to reveal the topic structure and evolutionary trends of HIE research.</p><p><strong>Results: </strong>Research interest in HIE has clearly increased from 2014 to 2023, and is continually expanding. The United States was found to be the most prolific country in this field. Harvard University was found to be the leading institution with the highest publication productivity. Journal of Medical Internet Research, Journal of The American Medical Informatics Association, and Applied Clinical Informatics were the top 3 journals with the highest articles in this field. Countries/regions and institutions having higher levels of international collaboration were more impactful. Research on HIE could be modeled into 7 topics related to the following areas: clinical (130/575, 22.6%), mobile application (123/575, 21.4%), consumer (99/575, 17.2%), teaching (61/575, 10.6%), public health (56/575, 9.7%), discipline (55/575, 9.6%), and nursing (51/575, 8.9%). The results clearly indicate the unique foci for each year, depicting the process of development for health informatics research.</p><p><strong>Conclusions: </strong>This is believed to be the first scientometric analysis exploring the research topics and trends in HIE. This study provides useful insights and implications, and the findings could be used as a guide for HIE contributors.</p>","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":"10 ","pages":"e58165"},"PeriodicalIF":3.2,"publicationDate":"2024-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11669873/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142814292","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: ChatGPT May Improve Access to Language-Concordant Care for Patients With Non-English Language Preferences
Authors: Fiatsogbe Dzuali, Kira Seiger, Roberto Novoa, Maria Aleshin, Joyce Teng, Jenna Lester, Roxana Daneshjou
Journal: JMIR Medical Education 2024;10:e51435 (published 2024-12-10). DOI: 10.2196/51435
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11651640/pdf/

This study evaluated the accuracy of ChatGPT in translating English patient education materials into Spanish, Mandarin, and Russian. While ChatGPT shows promise for translating Spanish and Russian medical information, Mandarin translations require further refinement, highlighting the need for careful review of AI-generated translations before clinical use.
{"title":"Evaluation of a Computer-Based Morphological Analysis Method for Free-Text Responses in the General Medicine In-Training Examination: Algorithm Validation Study.","authors":"Daiki Yokokawa, Kiyoshi Shikino, Yuji Nishizaki, Sho Fukui, Yasuharu Tokuda","doi":"10.2196/52068","DOIUrl":"10.2196/52068","url":null,"abstract":"<p><strong>Background: </strong>The General Medicine In-Training Examination (GM-ITE) tests clinical knowledge in a 2-year postgraduate residency program in Japan. In the academic year 2021, as a domain of medical safety, the GM-ITE included questions regarding the diagnosis from medical history and physical findings through video viewing and the skills in presenting a case. Examinees watched a video or audio recording of a patient examination and provided free-text responses. However, the human cost of scoring free-text answers may limit the implementation of GM-ITE. A simple morphological analysis and word-matching model, thus, can be used to score free-text responses.</p><p><strong>Objective: </strong>This study aimed to compare human versus computer scoring of free-text responses and qualitatively evaluate the discrepancies between human- and machine-generated scores to assess the efficacy of machine scoring.</p><p><strong>Methods: </strong>After obtaining consent for participation in the study, the authors used text data from residents who voluntarily answered the GM-ITE patient reproduction video-based questions involving simulated patients. The GM-ITE used video-based questions to simulate a patient's consultation in the emergency room with a diagnosis of pulmonary embolism following a fracture. Residents provided statements for the case presentation. We obtained human-generated scores by collating the results of 2 independent scorers and machine-generated scores by converting the free-text responses into a word sequence through segmentation and morphological analysis and matching them with a prepared list of correct answers in 2022.</p><p><strong>Results: </strong>Of the 104 responses collected-63 for postgraduate year 1 and 41 for postgraduate year 2-39 cases remained for final analysis after excluding invalid responses. The authors found discrepancies between human and machine scoring in 14 questions (7.2%); some were due to shortcomings in machine scoring that could be resolved by maintaining a list of correct words and dictionaries, whereas others were due to human error.</p><p><strong>Conclusions: </strong>Machine scoring is comparable to human scoring. It requires a simple program and calibration but can potentially reduce the cost of scoring free-text responses.</p>","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":"10 ","pages":"e52068"},"PeriodicalIF":3.2,"publicationDate":"2024-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11637224/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142787214","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Performance of GPT-3.5 and GPT-4 on the Korean Pharmacist Licensing Examination: Comparison Study.","authors":"Hye Kyung Jin, EunYoung Kim","doi":"10.2196/57451","DOIUrl":"10.2196/57451","url":null,"abstract":"<p><strong>Background: </strong>ChatGPT, a recently developed artificial intelligence chatbot and a notable large language model, has demonstrated improved performance on medical field examinations. However, there is currently little research on its efficacy in languages other than English or in pharmacy-related examinations.</p><p><strong>Objective: </strong>This study aimed to evaluate the performance of GPT models on the Korean Pharmacist Licensing Examination (KPLE).</p><p><strong>Methods: </strong>We evaluated the percentage of correct answers provided by 2 different versions of ChatGPT (GPT-3.5 and GPT-4) for all multiple-choice single-answer KPLE questions, excluding image-based questions. In total, 320, 317, and 323 questions from the 2021, 2022, and 2023 KPLEs, respectively, were included in the final analysis, which consisted of 4 units: Biopharmacy, Industrial Pharmacy, Clinical and Practical Pharmacy, and Medical Health Legislation.</p><p><strong>Results: </strong>The 3-year average percentage of correct answers was 86.5% (830/960) for GPT-4 and 60.7% (583/960) for GPT-3.5. GPT model accuracy was highest in Biopharmacy (GPT-3.5 77/96, 80.2% in 2022; GPT-4 87/90, 96.7% in 2021) and lowest in Medical Health Legislation (GPT-3.5 8/20, 40% in 2022; GPT-4 12/20, 60% in 2022). Additionally, when comparing the performance of artificial intelligence with that of human participants, pharmacy students outperformed GPT-3.5 but not GPT-4.</p><p><strong>Conclusions: </strong>In the last 3 years, GPT models have performed very close to or exceeded the passing threshold for the KPLE. This study demonstrates the potential of large language models in the pharmacy domain; however, extensive research is needed to evaluate their reliability and ensure their secure application in pharmacy contexts due to several inherent challenges. Addressing these limitations could make GPT models more effective auxiliary tools for pharmacy education.</p>","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":"10 ","pages":"e57451"},"PeriodicalIF":3.2,"publicationDate":"2024-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11633516/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142773237","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}