{"title":"ChatGPT's Performance on Portuguese Medical Examination Questions: Comparative Analysis of ChatGPT-3.5 Turbo and ChatGPT-4o Mini.","authors":"Filipe Prazeres","doi":"10.2196/65108","DOIUrl":"10.2196/65108","url":null,"abstract":"<p><strong>Background: </strong>Advancements in ChatGPT are transforming medical education by providing new tools for assessment and learning, potentially enhancing evaluations for doctors and improving instructional effectiveness.</p><p><strong>Objective: </strong>This study evaluates the performance and consistency of ChatGPT-3.5 Turbo and ChatGPT-4o mini in solving European Portuguese medical examination questions (2023 National Examination for Access to Specialized Training; Prova Nacional de Acesso à Formação Especializada [PNA]) and compares their performance to human candidates.</p><p><strong>Methods: </strong>ChatGPT-3.5 Turbo was tested on the first part of the examination (74 questions) on July 18, 2024, and ChatGPT-4o mini on the second part (74 questions) on July 19, 2024. Each model generated an answer using its natural language processing capabilities. To test consistency, each model was asked, \"Are you sure?\" after providing an answer. Differences between the first and second responses of each model were analyzed using the McNemar test with continuity correction. A single-parameter t test compared the models' performance to human candidates. Frequencies and percentages were used for categorical variables, and means and CIs for numerical variables. Statistical significance was set at P<.05.</p><p><strong>Results: </strong>ChatGPT-4o mini achieved an accuracy rate of 65% (48/74) on the 2023 PNA examination, surpassing ChatGPT-3.5 Turbo. ChatGPT-4o mini outperformed medical candidates, while ChatGPT-3.5 Turbo had a more moderate performance.</p><p><strong>Conclusions: </strong>This study highlights the advancements and potential of ChatGPT models in medical education, emphasizing the need for careful implementation with teacher oversight and further research.</p>","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":"11 ","pages":"e65108"},"PeriodicalIF":3.2,"publicationDate":"2025-03-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11902880/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143568353","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Detecting Artificial Intelligence-Generated Versus Human-Written Medical Student Essays: Semirandomized Controlled Study.","authors":"Berin Doru, Christoph Maier, Johanna Sophie Busse, Thomas Lücke, Judith Schönhoff, Elena Enax-Krumova, Steffen Hessler, Maria Berger, Marianne Tokic","doi":"10.2196/62779","DOIUrl":"10.2196/62779","url":null,"abstract":"<p><strong>Background: </strong>Large language models, exemplified by ChatGPT, have reached a level of sophistication that makes distinguishing between human- and artificial intelligence (AI)-generated texts increasingly challenging. This has raised concerns in academia, particularly in medicine, where the accuracy and authenticity of written work are paramount.</p><p><strong>Objective: </strong>This semirandomized controlled study aims to examine the ability of 2 blinded expert groups with different levels of content familiarity-medical professionals and humanities scholars with expertise in textual analysis-to distinguish between longer scientific texts in German written by medical students and those generated by ChatGPT. Additionally, the study sought to analyze the reasoning behind their identification choices, particularly the role of content familiarity and linguistic features.</p><p><strong>Methods: </strong>Between May and August 2023, a total of 35 experts (medical: n=22; humanities: n=13) were each presented with 2 pairs of texts on different medical topics. Each pair had similar content and structure: 1 text was written by a medical student, and the other was generated by ChatGPT (version 3.5, March 2023). Experts were asked to identify the AI-generated text and justify their choice. These justifications were analyzed through a multistage, interdisciplinary qualitative analysis to identify relevant textual features. Before unblinding, experts rated each text on 6 characteristics: linguistic fluency and spelling/grammatical accuracy, scientific quality, logical coherence, expression of knowledge limitations, formulation of future research questions, and citation quality. Univariate tests and multivariate logistic regression analyses were used to examine associations between participants' characteristics, their stated reasons for author identification, and the likelihood of correctly determining a text's authorship.</p><p><strong>Results: </strong>Overall, in 48 out of 69 (70%) decision rounds, participants accurately identified the AI-generated texts, with minimal difference between groups (medical: 31/43, 72%; humanities: 17/26, 65%; odds ratio [OR] 1.37, 95% CI 0.5-3.9). While content errors had little impact on identification accuracy, stylistic features-particularly redundancy (OR 6.90, 95% CI 1.01-47.1), repetition (OR 8.05, 95% CI 1.25-51.7), and thread/coherence (OR 6.62, 95% CI 1.25-35.2)-played a crucial role in participants' decisions to identify a text as AI-generated.</p><p><strong>Conclusions: </strong>The findings suggest that both medical and humanities experts were able to identify ChatGPT-generated texts in medical contexts, with their decisions largely based on linguistic attributes. The accuracy of identification appears to be independent of experts' familiarity with the text content. 
As the decision-making process primarily relies on linguistic attributes-such as stylistic features and text coherence-further quasi-ex","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":"11 ","pages":"e62779"},"PeriodicalIF":3.2,"publicationDate":"2025-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11914838/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143575840","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
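As a rough illustration of the multivariate logistic regression behind the odds ratios above, the following Python sketch fits a logistic model on synthetic decision-round data and converts coefficients to ORs with 95% CIs. Variable names and values are invented for illustration and are not taken from the study.

```python
# Hedged sketch, not the study's code: odds ratios with 95% CIs from a logistic model
# relating cited stylistic features to a correct "AI-generated" identification.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 69  # number of decision rounds reported in the abstract; outcomes here are simulated
df = pd.DataFrame({
    "redundancy_cited": rng.integers(0, 2, n),       # expert cited redundancy (0/1)
    "repetition_cited": rng.integers(0, 2, n),       # expert cited repetition (0/1)
    "correct_identification": rng.integers(0, 2, n), # identified the AI text correctly (0/1)
})

X = sm.add_constant(df[["redundancy_cited", "repetition_cited"]])
model = sm.Logit(df["correct_identification"], X).fit(disp=False)

# Exponentiate coefficients and confidence bounds to get ORs and 95% CIs
odds_ratios = pd.DataFrame({
    "OR": np.exp(model.params),
    "CI_lower": np.exp(model.conf_int()[0]),
    "CI_upper": np.exp(model.conf_int()[1]),
})
print(odds_ratios.round(2))
```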
{"title":"Comparison of an Emergency Medicine Asynchronous Learning Platform Usage Before and During the COVID-19 Pandemic: Retrospective Analysis Study.","authors":"Blake Briggs, Madhuri Mulekar, Hannah Morales, Iltifat Husain","doi":"10.2196/58100","DOIUrl":"10.2196/58100","url":null,"abstract":"<p><strong>Background: </strong>The COVID-19 pandemic challenged medical educators due to social distancing. Podcasts and asynchronous learning platforms help distill medical education in a socially distanced environment. Medical educators interested in providing asynchronous teaching should know how these methods performed during the pandemic.</p><p><strong>Objective: </strong>The purpose of this study was to assess the level of engagement for an emergency medicine (EM) board review podcast and website platform, before and during the COVID-19 pandemic. We measured engagement via website traffic, including such metrics as visits, bounce rate, unique visitors, and page views. We also evaluated podcast analytics, which included total listeners, engaged listeners, and number of plays.</p><p><strong>Methods: </strong>Content was designed after the American Board of EM Model, covering only 1 review question per episode. Website traffic and podcast analytics were studied monthly from 2 time periods of 20 months each, before the pandemic (July 11, 2018, to February 31, 2020) and during the pandemic (May 1, 2020, to December 31, 2021). March and April 2020 data were omitted from the analysis due to variations in closure at various domestic and international locations. Results underwent statistical analysis in March 2022.</p><p><strong>Results: </strong>A total of 132 podcast episodes and 93 handouts were released from July 11, 2018, to December 31, 2021. The mean number of listeners per podcast increased significantly from 2.11 (SD 1.19) to 3.77 (SD 0.76; t test, P<.001), the mean number engaged per podcast increased from 1.72 (SD 1.00) to 3.09 (SD 0.62; t test, P<.001), and the mean number of plays per podcast increased from 42.54 (SD 40.66) to 69.23 (SD 17.54; t test, P=.012). Similarly, the mean number of visits per posting increased from 5.85 (SD 3.28) to 15.39 (SD 3.06; t test, P<.001), the mean number of unique visitors per posting increased from 3.74 (SD 1.83) to 10.41 (SD 2.33; t test, P<.001), and the mean number of page views per posting increased from 17.13 (SD 10.63) to 33.32 (SD 7.01; t test, P<.001). Note that, all measures showed a decrease from November 2021 to December 2021.</p><p><strong>Conclusions: </strong>During the COVID-19 pandemic, there was an increased engagement for our EM board review podcast and website platform over a long-term period, specifically through website visitors and the number of podcast plays. Medical educators should be aware of the increasing usage of web-based education tools, and that asynchronous learning is favorably viewed by learners. Limitations include the inability to view Spotify (Spotify Technology S.A.) 
analytics during the study period, and confounding factors like increased popularity of social media inadvertently promoting the podcast.</p>","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":"11 ","pages":"e58100"},"PeriodicalIF":3.2,"publicationDate":"2025-02-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11870596/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143484521","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
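The per-metric comparisons above are t tests on monthly values from the two 20-month windows. A minimal Python sketch of one such comparison is shown below, using simulated monthly play counts; the abstract does not state whether pooled-variance or unequal-variance (Welch) t tests were used, so Welch is assumed here.

```python
# Hedged sketch, not the authors' code: comparing mean monthly engagement between
# the pre-pandemic and pandemic periods with a two-sample t test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
pre_pandemic = rng.normal(loc=42.5, scale=40.7, size=20)     # 20 pre-pandemic months (simulated plays per podcast)
during_pandemic = rng.normal(loc=69.2, scale=17.5, size=20)  # 20 pandemic months (simulated plays per podcast)

# Welch's t test (assumption: unequal variances between periods)
t_stat, p_value = stats.ttest_ind(pre_pandemic, during_pandemic, equal_var=False)
print(f"Welch t={t_stat:.2f}, p={p_value:.4f}")
```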
{"title":"Exploring Social Media Use Among Medical Students Applying for Residency Training: Cross-Sectional Survey Study.","authors":"Simi Jandu, Jennifer L Carey","doi":"10.2196/59417","DOIUrl":"10.2196/59417","url":null,"abstract":"<p><strong>Background: </strong>Since the COVID-19 pandemic, residency candidates have moved from attending traditional in-person interviews to virtual interviews with residency training programs. This transition spurred increased social media engagement by residency candidates, in an effort to learn about prospective programs, and by residency programs, to improve recruitment efforts. There is a paucity of literature on the effectiveness of social media outreach and its impact on candidates' perceptions of residency programs.</p><p><strong>Objective: </strong>We aimed to determine patterns of social media platform usage among prospective residency candidates and social media's influence on students' perceptions of residency programs.</p><p><strong>Methods: </strong>A cross-sectional survey was administered anonymously to fourth-year medical students who successfully matched to a residency training program at a single institution in 2023. These data were analyzed using descriptive statistics, as well as thematic analysis for open-ended questions.</p><p><strong>Results: </strong>Of the 148 eligible participants, 69 (46.6%) responded to the survey, of whom 45 (65.2%) used social media. Widely used social media platforms were Instagram (19/40, 47.5%) and Reddit (18/40, 45%). Social media influenced 47.6% (20/42) of respondents' opinions of programs and had a moderate or major effect on 26.2% (11/42) of respondents' decisions on program ranking. Resident-faculty relations and social events showcasing camaraderie and wellness were the most desired content.</p><p><strong>Conclusions: </strong>Social media is used by the majority of residency candidates during the residency application process and influences residency program ranking. This highlights the importance of residency programs in leveraging social media usage to recruit applicants and provide information that allows the candidate to better understand the program.</p>","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":"11 ","pages":"e59417"},"PeriodicalIF":3.2,"publicationDate":"2025-02-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11869503/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143469417","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Perceptions and Earliest Experiences of Medical Students and Faculty With ChatGPT in Medical Education: Qualitative Study.","authors":"Noura Abouammoh, Khalid Alhasan, Fadi Aljamaan, Rupesh Raina, Khalid H Malki, Ibraheem Altamimi, Ruaim Muaygil, Hayfaa Wahabi, Amr Jamal, Ali Alhaboob, Rasha Assad Assiri, Jaffar A Al-Tawfiq, Ayman Al-Eyadhy, Mona Soliman, Mohamad-Hani Temsah","doi":"10.2196/63400","DOIUrl":"10.2196/63400","url":null,"abstract":"<p><strong>Background: </strong>With the rapid development of artificial intelligence technologies, there is a growing interest in the potential use of artificial intelligence-based tools like ChatGPT in medical education. However, there is limited research on the initial perceptions and experiences of faculty and students with ChatGPT, particularly in Saudi Arabia.</p><p><strong>Objective: </strong>This study aimed to explore the earliest knowledge, perceived benefits, concerns, and limitations of using ChatGPT in medical education among faculty and students at a leading Saudi Arabian university.</p><p><strong>Methods: </strong>A qualitative exploratory study was conducted in April 2023, involving focused meetings with medical faculty and students with varying levels of ChatGPT experience. A thematic analysis was used to identify key themes and subthemes emerging from the discussions.</p><p><strong>Results: </strong>Participants demonstrated good knowledge of ChatGPT and its functions. The main themes were perceptions of ChatGPT use, potential benefits, and concerns about ChatGPT in research and medical education. The perceived benefits included collecting and summarizing information and saving time and effort. However, concerns and limitations centered around the potential lack of critical thinking in the information provided, the ambiguity of references, limitations of access, trust in the output of ChatGPT, and ethical concerns.</p><p><strong>Conclusions: </strong>This study provides valuable insights into the perceptions and experiences of medical faculty and students regarding the use of newly introduced large language models like ChatGPT in medical education. While the benefits of ChatGPT were recognized, participants also expressed concerns and limitations requiring further studies for effective integration into medical education, exploring the impact of ChatGPT on learning outcomes, student and faculty satisfaction, and the development of critical thinking skills.</p>","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":"11 ","pages":"e63400"},"PeriodicalIF":3.2,"publicationDate":"2025-02-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11888024/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143459104","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Instagram as a Tool to Improve Human Histology Learning in Medical Education: Descriptive Study.","authors":"Alejandro Escamilla-Sanchez, Juan Antonio López-Villodres, Carmen Alba-Tercedor, María Victoria Ortega-Jiménez, Francisca Rius-Díaz, Raquel Sanchez-Varo, Diego Bermúdez","doi":"10.2196/55861","DOIUrl":"10.2196/55861","url":null,"abstract":"<p><strong>Background: </strong>Student development is currently taking place in an environment governed by new technologies and social media. Some platforms, such as Instagram or X (previously known as \"Twitter\"), have been incorporated as additional tools for teaching and learning processes in higher education, especially in the framework of image-based applied disciplines, including radiology and pathology. Nevertheless, the role of social media in the teaching of core subjects such as histology has hardly been studied, and there are very few reports on this issue.</p><p><strong>Objective: </strong>The aim of this work was to investigate the impact of implementing social media on the ability to learn human histology. For this purpose, a set of voluntary e-learning activities was shared on Instagram as a complement to traditional face-to-face teaching.</p><p><strong>Methods: </strong>The proposal included questionnaires based on multiple-choice questions, descriptions of histological images, and schematic diagrams about the subject content. These activities were posted on an Instagram account only accessible by second-year medical students from the University of Malaga. In addition, students could share their own images taken during the laboratory practice and interact with their peers.</p><p><strong>Results: </strong>Of the students enrolled in Human Histology 2, 85.6% (143/167) agreed to participate in the platform. Most of the students valued the initiative positively and considered it an adequate instrument to improve their final marks. Specifically, 68.5% (98/143) of the student body regarded the multiple-choice questions and image-based questions as the most useful activities. Interestingly, there were statistically significant differences between the marks on the final exam (without considering other evaluation activities) for students who participated in the activity compared with those who did not or barely participated in the activity (P<.001). There were no significant differences by degree of participation between the more active groups.</p><p><strong>Conclusions: </strong>These results provide evidence that incorporating social media may be considered a useful, easy, and accessible tool to improve the learning of human histology in the context of medical degrees.</p>","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":"11 ","pages":"e55861"},"PeriodicalIF":3.2,"publicationDate":"2025-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11888019/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143459947","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reviewing Mobile Apps for Teaching Human Anatomy: Search and Quality Evaluation Study.","authors":"Guadalupe Esmeralda Rivera García, Miriam Janet Cervantes López, Juan Carlos Ramírez Vázquez, Arturo Llanes Castillo, Jaime Cruz Casados","doi":"10.2196/64550","DOIUrl":"10.2196/64550","url":null,"abstract":"<p><strong>Background: </strong>Mobile apps designed for teaching human anatomy offer a flexible, interactive, and personalized learning platform, enriching the educational experience for both students and health care professionals.</p><p><strong>Objective: </strong>This study aimed to conduct a systematic review of the human anatomy mobile apps available on Google Play, evaluate their quality, highlight the highest scoring apps, and determine the relationship between objective quality ratings and subjective star ratings.</p><p><strong>Methods: </strong>The Mobile App Rating Scale (MARS) was used to evaluate the apps. The intraclass correlation coefficient was calculated using a consistency-type 2-factor random model to measure the reliability of the evaluations made by the experts. In addition, Pearson correlations were used to analyze the relationship between MARS quality scores and subjective evaluations of MARS quality item 23.</p><p><strong>Results: </strong>The mobile apps with the highest overall quality scores according to the MARS (ie, sections A, B, C, and D) were Organos internos 3D (anatomía) (version 4.34), Sistema óseo en 3D (Anatomía) (version 4.32), and VOKA Anatomy Pro (version 4.29). To measure the reliability of the MARS quality evaluations (sections A, B, C, and D), the intraclass correlation coefficient was used, and the result was \"excellent.\" Finally, Pearson correlation results revealed a significant relationship (r=0.989; P<.001) between the quality assessments conducted by health care professionals and the subjective evaluations of item 23.</p><p><strong>Conclusions: </strong>The average evaluation results of the selected apps indicated a \"good\" level of quality, and those with the highest ratings could be recommended. However, the lack of scientific backing for these technological tools is evident. It is crucial that research centers and higher education institutions commit to the active development of new mobile health apps, ensuring their accessibility and validation for the general public.</p>","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":"11 ","pages":"e64550"},"PeriodicalIF":3.2,"publicationDate":"2025-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11888001/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143417021","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Integration of an Audiovisual Learning Resource in a Podiatric Medical Infectious Disease Course: Multiple Cohort Pilot Study.","authors":"Garrik Hoyt, Chandra Shekhar Bakshi, Paramita Basu","doi":"10.2196/55206","DOIUrl":"10.2196/55206","url":null,"abstract":"<p><strong>Background: </strong>Improved long-term learning retention leads to higher exam scores and overall course grades, which is crucial for success in preclinical coursework in any podiatric medicine curriculum. Audiovisual mnemonics, in conjunction with text-based materials and an interactive user interface, have been shown to increase memory retention and higher order thinking.</p><p><strong>Objective: </strong>This pilot study aims to evaluate the effectiveness of integrating web-based multimedia learning resources for improving student engagement and increasing learning retention.</p><p><strong>Methods: </strong>A quasi-experimental study was conducted with 2 cohorts totaling 158 second-year podiatric medical students. The treatment group had access to Picmonic's audiovisual resources, while the control group followed traditional instruction methods. Exam scores, final course grades, and user interactions with Picmonic were analyzed. Logistic regression and correlation analyses were conducted to examine the relationships between Picmonic access, performance outcomes, and student engagement.</p><p><strong>Results: </strong>The treatment group (n=91) had significantly higher average exam scores (P<.001) and final course grades (P<.001) than the control group (n=67). Effect size for the average final grades (d=0.96) indicated the practical significance of these differences. Logistic regression analysis revealed a positive association between Picmonic access with an odds ratio of 2.72 with a 95% confidence interval, indicating that it is positively associated with the likelihood of achieving high final grades. Correlation analysis revealed a positive relationship (r=0.25, P=.02) between the number of in-video questions answered and students' final grades. Survey responses reflected increased student engagement, comprehension, and higher user satisfaction (3.71 out of 5 average rating) with the multimedia-based resources compared to traditional instructional resources.</p><p><strong>Conclusions: </strong>This pilot study underscores the positive impact of animation-supported web-based instruction on preclinical medical education. The treatment group, equipped with Picmonic, exhibited improved learning outcomes, enhanced engagement, and high satisfaction. These results contribute to the discourse on innovative educational methods and highlight the potential of multimedia-based learning resources to enrich medical curricula. 
Despite certain limitations, this research suggests that animation-supported audiovisual instruction offers a valuable avenue for enhancing student learning experiences in medical education.</p>","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":"11 ","pages":"e55206"},"PeriodicalIF":3.2,"publicationDate":"2025-02-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11835597/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143400162","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
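The effect size and correlation reported above can be computed as in the short Python sketch below. The grade and engagement values are simulated, and the pooled-SD formula for Cohen's d is an assumption, since the abstract does not state which variant was used.

```python
# Hedged sketch, not the study's code: Cohen's d for the between-cohort grade
# difference and the correlation between in-video engagement and final grade.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
treatment_grades = rng.normal(loc=85, scale=6, size=91)  # n=91 treatment cohort (simulated)
control_grades = rng.normal(loc=79, scale=7, size=67)    # n=67 control cohort (simulated)

def cohens_d(a, b):
    """Cohen's d with a pooled standard deviation (assumed variant)."""
    na, nb = len(a), len(b)
    pooled_sd = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled_sd

print(f"d = {cohens_d(treatment_grades, control_grades):.2f}")

# Pearson correlation between engagement (in-video questions answered) and final grade.
questions_answered = rng.poisson(lam=120, size=91)  # simulated engagement counts
r, p = stats.pearsonr(questions_answered, treatment_grades)
print(f"r = {r:.2f}, p = {p:.3f}")
```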
{"title":"Virtual Reality Simulation for Undergraduate Nursing Students for Care of Patients With Infectious Diseases: Mixed Methods Study.","authors":"Wen Chang, Chun-Chih Lin, Julia Crilly, Hui-Ling Lee, Li-Chin Chen, Chin-Yen Han","doi":"10.2196/64780","DOIUrl":"10.2196/64780","url":null,"abstract":"<p><strong>Background: </strong>Virtual reality simulation (VRS) teaching offers nursing students a safe, immersive learning environment with immediate feedback, enhancing learning outcomes. Before the COVID-19 pandemic, nursing students had limited training and opportunities to care for patients in isolation units with infectious diseases. However, the pandemic highlighted the ongoing global priority of providing care for patients with infectious diseases.</p><p><strong>Objective: </strong>This study aims to (1) examine the effectiveness of VRS in preparing nursing students to care for patients with infectious diseases by assessing its impact on their theoretical knowledge, learning motivation, and attitudes; and (2) evaluate their experiences with VRS.</p><p><strong>Methods: </strong>This 2-phased mixed methods study recruited third-year undergraduate nursing students enrolled in the Integrated Emergency and Critical Care course at a university in Taiwan. Phase 1 used a quasi-experimental design to address objective 1 by comparing the learning outcomes of students in the VRS teaching program (experimental group) with those in the traditional teaching program (control group). Tools included an infection control written test, the Instructional Materials Motivation Survey, and a learning attitude questionnaire. The experimental group participated in a VRS lesson titled \"Caring for a Patient with COVID-19 in the Negative Pressure Unit\" as part of the infection control unit. In phase 2, semistructured interviews were conducted to address objective 2, exploring students' learning experiences.</p><p><strong>Results: </strong>A total of 107 students participated in phase 1, and 18 students participated in phase 2. Both the VRS and control groups showed significant improvements in theoretical knowledge scores (for the VRS group t<sub>46</sub>=-7.47; P<.001, for the control group t<sub>59</sub>=-4.04; P<.001). However, compared with the control group, the VRS group achieved significantly higher theoretical knowledge scores (t<sub>98.13</sub>=2.70; P=.008) and greater learning attention (t<sub>105</sub>=2.30; P=.02) at T1. Additionally, the VRS group demonstrated a statistically significant higher regression coefficient for learning confidence compared with the control group (β=.29; P=.03). The students' learning experiences in the VRS group were categorized into 4 themes: Applying Professional Knowledge to Patient Care, Enhancing Infection Control Skills, Demonstrating Patient Care Confidence, and Engaging in Real Clinical Cases. The core theme identified was Strengthening Clinical Patient Care Competencies.</p><p><strong>Conclusions: </strong>The findings suggest that VRS teaching significantly enhanced undergraduate nursing students' infection control knowledge, learning attention, and confidence. 
Qualitative insights reinforced the quantitative results, highlighting the holistic benefits of VRS teaching in nursing education, including improved learnin","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":"11 ","pages":"e64780"},"PeriodicalIF":3.2,"publicationDate":"2025-02-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11862763/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143400086","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
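The within- and between-group knowledge comparisons reported above (paired t tests for pre-to-post change and an independent-samples test between groups) can be sketched in Python as follows. Scores are simulated, and group sizes of 47 and 60 are inferred from the reported degrees of freedom rather than stated in this abstract; the fractional degrees of freedom (t(98.13)) suggest a Welch-type test, which is assumed here.

```python
# Hedged sketch, not the authors' analysis: within-group and between-group t tests.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
vrs_pre, vrs_post = rng.normal(60, 10, 47), rng.normal(75, 9, 47)     # VRS group, n=47 (inferred)
ctrl_pre, ctrl_post = rng.normal(60, 10, 60), rng.normal(68, 10, 60)  # control group, n=60 (inferred)

# Paired t tests for the pre-to-post change within each group
print(stats.ttest_rel(vrs_pre, vrs_post))
print(stats.ttest_rel(ctrl_pre, ctrl_post))

# Between-group comparison of post-test scores; Welch's t (unequal variances assumed)
print(stats.ttest_ind(vrs_post, ctrl_post, equal_var=False))
```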
{"title":"Generative Artificial Intelligence in Medical Education-Policies and Training at US Osteopathic Medical Schools: Descriptive Cross-Sectional Survey.","authors":"Tsunagu Ichikawa, Elizabeth Olsen, Arathi Vinod, Noah Glenn, Karim Hanna, Gregg C Lund, Stacey Pierce-Talsma","doi":"10.2196/58766","DOIUrl":"10.2196/58766","url":null,"abstract":"<p><strong>Background: </strong>Interest has recently increased in generative artificial intelligence (GenAI), a subset of artificial intelligence that can create new content. Although the publicly available GenAI tools are not specifically trained in the medical domain, they have demonstrated proficiency in a wide range of medical assessments. The future integration of GenAI in medicine remains unknown. However, the rapid availability of GenAI with a chat interface and the potential risks and benefits are the focus of great interest. As with any significant medical advancement or change, medical schools must adapt their curricula to equip students with the skills necessary to become successful physicians. Furthermore, medical schools must ensure that faculty members have the skills to harness these new opportunities to increase their effectiveness as educators. How medical schools currently fulfill their responsibilities is unclear. Colleges of Osteopathic Medicine (COMs) in the United States currently train a significant proportion of the total number of medical students. These COMs are in academic settings ranging from large public research universities to small private institutions. Therefore, studying COMs will offer a representative sample of the current GenAI integration in medical education.</p><p><strong>Objective: </strong>This study aims to describe the policies and training regarding the specific aspect of GenAI in US COMs, targeting students, faculty, and administrators.</p><p><strong>Methods: </strong>Web-based surveys were sent to deans and Student Government Association (SGA) presidents of the main campuses of fully accredited US COMs. The dean survey included questions regarding current and planned policies and training related to GenAI for students, faculty, and administrators. The SGA president survey included only those questions related to current student policies and training.</p><p><strong>Results: </strong>Responses were received from 81% (26/32) of COMs surveyed. This included 47% (15/32) of the deans and 50% (16/32) of the SGA presidents (with 5 COMs represented by both the deans and the SGA presidents). Most COMs did not have a policy on the student use of GenAI, as reported by the dean (14/15, 93%) and the SGA president (14/16, 88%). Of the COMs with no policy, 79% (11/14) had no formal plans for policy development. Only 1 COM had training for students, which focused entirely on the ethics of using GenAI. Most COMs had no formal plans to provide mandatory (11/14, 79%) or elective (11/15, 73%) training. No COM had GenAI policies for faculty or administrators. Eighty percent had no formal plans for policy development. Furthermore, 33.3% (5/15) of COMs had faculty or administrator GenAI training. 
Except for examination question development, there was no training to increase faculty or administrator capabilities and efficiency or to decrease their workload.</p><p><strong>Conclusions: </strong>The survey revealed that most ","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":"11 ","pages":"e58766"},"PeriodicalIF":3.2,"publicationDate":"2025-02-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11835596/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143400159","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}