{"title":"Enhancing Preclinical Training for Removable Partial Dentures Through Participatory 3D Simulation: Development and Usability Study.","authors":"Yikchi Siu, Hefei Bai, Jung-Min Yoon, Hongqiang Ye, Yunsong Liu, Yongsheng Zhou","doi":"10.2196/71743","DOIUrl":"10.2196/71743","url":null,"abstract":"<p><strong>Background: </strong>The integration of digital technology in dental education has been recognized for its potential to address the challenges in training removable partial denture (RPD) design. RPD framework design is crucial to long-term success in the treatment of dentition defects, but traditional training methods often fall short of adequately preparing students for real-world applications.</p><p><strong>Objective: </strong>This study aimed to evaluate the efficacy of a 3D simulation-based preclinical training software for RPDs in enhancing learning outcomes among first-year stomatology master's students, while also assessing user perceptions among students and faculty.</p><p><strong>Methods: </strong>RTS (Yikchi Siu) is a preclinical training software that simulates the clinical process of treating patients with partial edentulism. In this study, 26 newly enrolled master's degree students in stomatology who volunteered to participate were randomly divided into a control group (n=13) and a training group (n=13). The training group used the RTS for 2 credit hours (90 min) of self-study, while the control group received theoretical lessons and case practice from an instructor. After 2 hours, both groups completed the theoretical knowledge and drawing tests for RPD simultaneously. Test results were evaluated and graded by 2 experts in prosthodontics. Both users and teachers filled out a questionnaire afterward about their training experience.</p><p><strong>Results: </strong>Participants in the training group obtained better final grades compared to controls (theoretical test: 88.8, SD 2.3; 85.7, SD 3.3, respectively; P=.01; drawing test: 89.8, SD 4.5; 85.1, SD 4.3, respectively; P=.01). The training group had a shorter completion time in the drawing test (12.6, SD 19 min; 17.7, SD 3 min, respectively; P<.001) but there were no significant differences in the completion times in the theoretical test (23.2, SD 2.2 min; 24.9, SD 2.8 min, respectively; P=.14). Students and faculty generally had a favorable opinion of the RTS.</p><p><strong>Conclusions: </strong>The effectiveness of the RTS for newly enrolled master's degree students in stomatology to understand and apply their knowledge of RPD framework design was validated; the system was well received by both students and faculty members, who reported that it improved the effectiveness and convenience of teaching.</p>","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":"11 ","pages":"e71743"},"PeriodicalIF":3.2,"publicationDate":"2025-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12454677/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145125998","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Perception of Medical Undergraduates on Artificial Intelligence in Medical Education: Qualitative Exploration.","authors":"Thilanka Seneviratne, Kaumudee Kodikara, Isuru Abeykoon, Wathsala Palpola","doi":"10.2196/73798","DOIUrl":"10.2196/73798","url":null,"abstract":"<p><strong>Background: </strong>Artificial intelligence (AI) has revolutionized medical education by delivering tools that enhance and optimize learning. However, there is limited research on the medical students' perceptions regarding the effectiveness of AI as a learning tool, particularly in Sri Lanka.</p><p><strong>Objective: </strong>The study aimed to explore students' perceived barriers and limitations to using AI for learning as well as their expectations in terms of future use of AI in medical education.</p><p><strong>Methods: </strong>An exploratory qualitative study was conducted in September 2024, involving focus group discussions with medical students from two major universities in Sri Lanka. Reflexive thematic analysis was used to identify key themes and subthemes emerging from the discussions.</p><p><strong>Results: </strong>Thirty-eight medical students participated in 5 focus group discussions. The majority of the participants were Sinhalese female students. The perceived benefits included saving time and effort and collecting and summarizing information. However, concerns and limitations centered around inaccuracies of information provided and the negative impacts on critical thinking, social interactions (peer and student teacher), and long-term retention of knowledge. Students were confused about contradictory messages received from educators regarding the use of AI for teaching and learning. However, participants showed an enthusiasm for learning more about the ethical use of AI to enhance learning and indicated that basic AI knowledge should be taught in their undergraduate program.</p><p><strong>Conclusions: </strong>Participants recognized several benefits of AI-assisted learning but also expressed concerns and limitations requiring further studies for effective integration of AI into medical education. They expressed openness and enthusiasm for using AI while demonstrating confusion and reluctance due to the perspectives and stance of educators. We recommend educating both the educators and learners on the ethical use of AI, enabling a formal integration of AI tools into medical curricula.</p>","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":"11 ","pages":"e73798"},"PeriodicalIF":3.2,"publicationDate":"2025-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12448566/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145092535","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluating the Potential and Accuracy of ChatGPT-3.5 and 4.0 in Medical Licensing and In-Training Examinations: Systematic Review and Meta-Analysis.","authors":"Anila Jaleel, Umair Aziz, Ghulam Farid, Muhammad Zahid Bashir, Tehmasp Rehman Mirza, Syed Mohammad Khizar Abbas, Shiraz Aslam, Rana Muhammad Hassaan Sikander","doi":"10.2196/68070","DOIUrl":"10.2196/68070","url":null,"abstract":"<p><strong>Background: </strong>Artificial intelligence (AI) has significantly impacted health care, medicine, and radiology, offering personalized treatment plans, simplified workflows, and informed clinical decisions. ChatGPT (OpenAI), a conversational AI model, has revolutionized health care and medical education by simulating clinical scenarios and improving communication skills. However, inconsistent performance across medical licensing examinations and variability between countries and specialties highlight the need for further research on contextual factors influencing AI accuracy and exploring its potential to enhance technical proficiency and soft skills, making AI a reliable tool in patient care and medical education.</p><p><strong>Objective: </strong>This systematic review aims to evaluate and compare the accuracy and potential of ChatGPT-3.5 and 4.0 in medical licensing and in-training residency examinations across various countries and specialties.</p><p><strong>Methods: </strong>A systematic review and meta-analysis were conducted, adhering to the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines. Data were collected from multiple reputable databases (Scopus, PubMed, JMIR Publications, Elsevier, BMJ, and Wiley Online Library), focusing on studies published from January 2023 to July 2024. Analysis specifically targeted research assessing ChatGPT's efficacy in medical licensing exams, excluding studies not related to this focus or published in languages other than English. Ultimately, 53 studies were included, providing a robust dataset for comparing the accuracy rates of ChatGPT-3.5 and 4.0.</p><p><strong>Results: </strong>ChatGPT-4 outperformed ChatGPT-3.5 in medical licensing exams, achieving a pooled accuracy of 81.8%, compared to ChatGPT-3.5's 60.8%. In in-training residency exams, ChatGPT-4 achieved an accuracy rate of 72.2%, compared to 57.7% for ChatGPT-3.5. The forest plot presented a risk ratio of 1.36 (95% CI 1.30-1.43), demonstrating that ChatGPT-4 was 36% more likely to provide correct answers than ChatGPT-3.5 across both medical licensing and residency exams. These results indicate that ChatGPT-4 significantly outperforms ChatGPT-3.5, but the performance advantage varies depending on the exam type. This highlights the importance of targeted improvements and further research to optimize ChatGPT-4's performance in specific educational and clinical settings.</p><p><strong>Conclusions: </strong>ChatGPT-4.0 and 3.5 show promising results in enhancing medical education and supporting clinical decision-making, but they cannot replace the comprehensive skill set required for effective medical practice. 
Future research should focus on improving AI's capabilities in interpreting complex clinical data and enhancing its reliability as an educational resource.</p>","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":"11 ","pages":"e68070"},"PeriodicalIF":3.2,"publicationDate":"2025-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12495368/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145092539","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
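A quick arithmetic note on the headline claim in the record above: a risk ratio (RR) is the ratio of the probabilities of a correct answer under each model, so an RR of 1.36 reads directly as "36% more likely." As a rough sanity check (the published RR is a study-weighted meta-analytic estimate, so it need not equal this crude ratio of the pooled accuracies exactly):

```latex
\mathrm{RR} \;=\; \frac{P(\text{correct} \mid \text{GPT-4})}{P(\text{correct} \mid \text{GPT-3.5})}
\;\approx\; \frac{0.818}{0.608} \;\approx\; 1.35
```

The crude ratio of the pooled licensing-exam accuracies lands within the reported 95% CI of 1.30-1.43.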
{"title":"Performance Evaluation of 18 Generative AI Models (ChatGPT, Gemini, Claude, and Perplexity) in 2024 Japanese Pharmacist Licensing Examination: Comparative Study.","authors":"Hiroyasu Sato, Katsuhiko Ogasawara, Hidehiko Sakurai","doi":"10.2196/76925","DOIUrl":"10.2196/76925","url":null,"abstract":"<p><strong>Background: </strong>Generative artificial intelligence (AI) has shown rapid advancements and increasing applications in various domains, including health care. Previous studies have evaluated AI performance on medical license examinations, primarily focusing on ChatGPT. However, the availability of new online chat-based large language models (OC-LLMs) and their potential utility in pharmacy licensing examinations remain underexplored. Considering that pharmacists require a broad range of expertise in physics, chemistry, biology, and pharmacology, verifying the knowledge base and problem-solving abilities of these new models in Japanese pharmacy examinations is necessary.</p><p><strong>Objective: </strong>This study aimed to assess the performance of 18 OC-LLMs released in 2024 in the 107th Japanese National License Examination for Pharmacists (JNLEP). Specifically, the study compared their accuracy and identified areas of improvement relative to earlier models.</p><p><strong>Methods: </strong>The 107th JNLEP, comprising 345 questions in Japanese, was used as a benchmark. Each OC-LLM was prompted by the original text-based questions, and images were uploaded where permitted. No additional prompt engineering or English translation was performed. For questions that included diagrams or chemical structures, the models incapable of image input were considered incorrect. The model outputs were compared with publicly available correct answers. The overall accuracy rates were calculated based on subject area (pharmacology and chemistry) and question type (text-only, diagram-based, calculation, and chemical structure). Fleiss' κ was used to measure answer consistency among the top-performing models.</p><p><strong>Results: </strong>Four flagship models-ChatGPT o1, Gemini 2.0 Flash, Claude 3.5 Sonnet (new), and Perplexity Pro-achieved 80% accuracy, surpassing the official passing threshold and average examinee score. A significant improvement in the overall accuracy was observed between the early and the latest 2024 models. Marked improvements were noted in text-only and diagram-based questions compared with those of earlier versions. However, the accuracy of chemistry-related and chemical structure questions remains relatively low. Fleiss' κ among the 4 flagship models was 0.334, which suggests moderate consistency but highlights variability in more complex questions.</p><p><strong>Conclusions: </strong>OC-LLMs have substantially improved their capacity to handle Japanese pharmacists' examination content, with several newer models achieving accuracy rates of >80%. Despite these advancements, even the best-performing models exhibit an error rate exceeding 10%, underscoring the ongoing need for careful human oversight in clinical settings. 
Overall, the 107th JNLEP will serve as a valuable benchmark for current and future generative AI evaluations in pharmacy licensing examinations.</p>","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":"11 ","pages":"e76925"},"PeriodicalIF":3.2,"publicationDate":"2025-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12445623/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145087509","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
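For readers unfamiliar with the agreement statistic quoted above, Fleiss' κ generalizes inter-rater agreement to more than 2 raters; here the 4 flagship models act as "raters" assigning one answer option per question. Below is a minimal sketch of the computation with statsmodels; the answer matrix is a hypothetical placeholder, not the study's data:

```python
# Fleiss' kappa across 4 models treated as raters: rows are questions,
# columns are models, and entries are the chosen answer option (0-4).
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

answers = np.array([
    [2, 2, 2, 2],  # all 4 models agree
    [1, 1, 3, 1],  # one dissenter
    [0, 4, 2, 3],  # wide disagreement
    [3, 3, 3, 1],
])

# Convert rater-wise labels into a (questions x categories) count table,
# which is the input format fleiss_kappa expects.
table, _ = aggregate_raters(answers)
print(f"Fleiss' kappa = {fleiss_kappa(table):.3f}")
```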
{"title":"Integrated e-Learning for Shoulder Anatomy and Clinical Examination Skills in First-Year Medical Students: Randomized Controlled Trial.","authors":"Roland Koch, Lena Gassner, Navina Gerlach, Teresa Festl-Wietek, Bernhard Hirt, Stefanie Joos, Thomas Shiozawa","doi":"10.2196/62666","DOIUrl":"10.2196/62666","url":null,"abstract":"<p><strong>Background: </strong>Applying functional anatomy to clinical examination techniques in shoulder examination is challenging for physicians at all learning stages. Anatomy teaching has shifted toward a more function-oriented approach and has increasingly incorporated e-learning. There is limited evidence on whether the integrated teaching of professionalism, clinical examination technique, and functional anatomy via e-learning is effective.</p><p><strong>Objective: </strong>This study aimed to investigate the impact of an integrated blended learning course on the ability of first-year medical students to perform a shoulder examination on healthy volunteers.</p><p><strong>Methods: </strong>Based on Kolb's experiential learning theory, we designed a course on shoulder anatomy and clinical examination techniques that integrates preclinical and clinical content across all 4 stages of Kolb's learning cycle. The study is a randomized, observer-blinded controlled trial involving first-year medical students who are assigned to one of two groups. Both groups participated in blended learning courses; however, the intervention group's course combined clinical examination, anatomy, and professional behavior and included a peer-assisted practice session as well as a flipped classroom seminar. The control group's course combined an online lecture with self-study and self-examination. After completing the course, participants uploaded a video of their shoulder examination. The videos were scored by 2 blinded raters using a standardized examination checklist with a total score of 40.</p><p><strong>Results: </strong>Thirty-eight medical students were included from the 80 participants needed based on the power calculation. Seventeen intervention and 14 control students completed the 3-week study. The intervention group students scored a mean of 34.71 (SD 1.99). The control students scored a mean of 29.43 (SD 5.13). The difference of means was 5.3 points and proved to be statistically significant (P<.001; 2-sided Mann-Whitney U test).</p><p><strong>Conclusions: </strong>The study shows that anatomy, professional behavior, and clinical examination skills can also be taught in an integrated blended learning approach. For first-year medical students, this approach proved more effective than online lectures and self-study.</p>","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":"11 ","pages":"e62666"},"PeriodicalIF":3.2,"publicationDate":"2025-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12443353/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145081789","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Impact of Motivational Interviewing Education on General Practitioners' and Trainees' Learning and Diabetes Outcomes in Primary Care: Mixed Methods Study.","authors":"Isaraporn Thepwongsa, Pat Nonjui, Radhakrishnan Muthukumar, Poompong Sripa","doi":"10.2196/75916","DOIUrl":"10.2196/75916","url":null,"abstract":"<p><strong>Background: </strong>Effective diabetes management requires behavioral change support from primary care providers. However, general practitioners (GPs) often lack training in patient-centered communication methods such as motivational interviewing (MI), especially in time-constrained settings. While brief MI offers a practical alternative, evidence on its impact among GPs and patient outcomes remains limited.</p><p><strong>Objective: </strong>This study aimed to evaluate the effectiveness of a structured MI educational program for GPs and GP trainees on their MI knowledge and confidence, and its impact on clinical outcomes among patients with type 2 diabetes in primary care settings.</p><p><strong>Methods: </strong>A mixed methods study was conducted using a before-and-after two-group design with quantitative assessments of GPs' knowledge and patients' biomarkers, supplemented by qualitative interviews. The intervention group (n=35) received a 4-hour interactive MI workshop, optional web-based modules, and brief MI guides. The control group received standard care. A total of 149 and 167 patients with diabetes were included in the study and control groups, respectively.</p><p><strong>Results: </strong>A paired-sample t test was conducted to evaluate the impact of the MI course on the learners' knowledge. There was a statistically significant difference in the knowledge test scores from Time 1 (mean 11.46, SD 3.48) to Time 2 (mean 15.04, SD 2.35), t28= -7.74; P<.001 (2-tailed). The mean increase in knowledge score was 3.57 (SD 2.44), with a 95% CI of 2.62 to 4.52, indicating a large and statistically significant effect. The eta-squared statistic indicated a large effect size (eta-squared=0.85). Patients in the intervention group had greater improvements in HbA1c (mean difference= -0.50, 95% CI -0.91 to -0.09; P=.02) and diastolic blood pressure (mean difference= -5.96 mmHg, 95% CI -8.66 to -3.25; P<.001) compared to controls. Qualitative feedback highlighted the usefulness of brief MI, along with challenges in mastering advanced techniques and time constraints.</p><p><strong>Conclusions: </strong>The MI educational program improved GP trainees' MI knowledge and patient outcomes. Brief MI appears feasible in primary care but requires ongoing support for skill development and implementation.</p>","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":"11 ","pages":"e75916"},"PeriodicalIF":3.2,"publicationDate":"2025-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12443351/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145076245","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Paradox of AI in Higher Education: Qualitative Inquiry Into AI Dependency Among Educators in Palestine.","authors":"Anas Ali Alhur, Zuheir N Khlaif, Bilal Hamamra, Elham Hussein","doi":"10.2196/74947","DOIUrl":"10.2196/74947","url":null,"abstract":"<p><strong>Background: </strong>Artificial intelligence (AI) is increasingly embedded in medical education, providing benefits in instructional design, content creation, and administrative efficiency. Tools like ChatGPT are reshaping training and teaching practices in digital health. However, concerns about faculty overreliance highlight risks to pedagogical autonomy, cognitive engagement, and ethics. Despite global interest, there is limited empirical research on AI dependency among medical educators, particularly in underrepresented regions like the Global South.</p><p><strong>Objective: </strong>This study focused on Palestine and aimed to (1) identify factors contributing to AI dependency among medical educators, (2) assess its impact on teaching autonomy, decision-making, and professional identity, and (3) propose strategies for sustainable and responsible AI integration in digital medical education.</p><p><strong>Methods: </strong>A qualitative research design was used, using semistructured interviews (n=22) and focus group discussions (n=24) involving 46 medical educators from nursing, pharmacy, medicine, optometry, and dental sciences. Thematic analysis, supported by NVivo (QSR International), was conducted on 15.5 hours of transcribed data. Participants varied in their frequency of AI use: 45.7% (21/46) used AI daily, 30.4% (14/46) weekly, and 15.2% (7/46) monthly.</p><p><strong>Results: </strong>In total, 5 major themes were identified as drivers of AI dependency: institutional workload (reported by >80% [37/46] of participants), low academic confidence (noted by 28/46, 60%), and perfectionism-related stress (23/46, 50%). The following 6 broad consequences of AI overreliance were identified: Skills Atrophy (reported by 89% [41/46]): educators reported reduced critical thinking, scientific writing, and decision-making abilities. Pedagogical erosion (35/46, 76%): decreased student interaction and reduced teaching innovation. Motivational decline (31/46, 67%): increased procrastination and reduced intrinsic motivation. Ethical risks (24/46, 52%): concerns about plagiarism and overuse of AI-generated content. Social fragmentation (22/46, 48%): diminished peer collaboration and mentorship. Creativity suppression (20/46, 43%): reliance on AI for content generation diluted instructional originality., Strategies reported by participants to address these issues included establishing boundaries for AI use (n=41), fostering hybrid intelligence (n=37), and integrating AI literacy into teaching practices (n=39).</p><p><strong>Conclusions: </strong>While AI tools can enhance digital health instruction, unchecked reliance risks eroding essential clinician competencies. 
This study identifies cognitive, pedagogical, and ethical consequences of AI overuse in medical education and highlights the need for AI literacy, professional development, and ethical frameworks to ensure responsible and balanced integration.</p>","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":"11 ","pages":"e74947"},"PeriodicalIF":3.2,"publicationDate":"2025-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12435755/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145070858","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Effects of the Hidden Curriculum in Medical Education: Scoping Review.","authors":"Sebastian Parra Larrotta, Erwin Hernando Hernández Rincón, Daniela Niño Correa, Claudia Liliana Jaimes Peñuela, Alvaro Enrique Romero Tapia","doi":"10.2196/68481","DOIUrl":"10.2196/68481","url":null,"abstract":"<p><strong>Background: </strong>Medical education now focuses on developing skilled and dependable professionals, with particular attention to the hidden curriculum and its influence on professionalism and humanism.</p><p><strong>Objective: </strong>This scoping review aimed to analyze the available evidence on the benefits and adverse effects of the hidden curriculum in medical education.</p><p><strong>Methods: </strong>A scoping review of the literature available in the indexed databases PubMed, Scopus, ScienceDirect, and Latin American and Caribbean Health Sciences Literature (LILACS) with MeSH (Medical Subject Headings) descriptors was conducted on the effects of the hidden curriculum in medical education between January 2000 and April 2024. A total of 29 papers were selected for the review.</p><p><strong>Results: </strong>Our review included studies from 10 countries, most of which were descriptive and cross-sectional, revealing both positive and negative impacts of the hidden curriculum in medical education. These include the transmission of implicit values and the influence on forming skills and professional identity. It was found that some elements contributed to the integral development of students, and others generated challenges that affected the quality of medical education. Likewise, the need for further research to design implementation strategies in different medical schools was described.</p><p><strong>Conclusions: </strong>The hidden curriculum proves to have both a positive and negative impact on the attitudes and values of medical students. The findings highlight the need to generate greater awareness and proactive strategies in educational institutions to improve the quality of training and promote the holistic development of future health professionals.</p>","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":"11 ","pages":"e68481"},"PeriodicalIF":3.2,"publicationDate":"2025-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12481137/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145070848","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Investigating Learning Effects Through the Implementation of Teledermatology Consultations Among General Practitioners in Germany: Mixed Methods Process Evaluation.","authors":"Andreas Polanc, Inka Roesel, Elke Feil, Peter Martus, Stefanie Joos, Roland Koch","doi":"10.2196/65915","DOIUrl":"10.2196/65915","url":null,"abstract":"<p><strong>Background: </strong>The increasing prevalence of dermatological diseases will pose a growing challenge to the health care system and, in particular, to general practitioners (GPs) as the first point of contact for these patients. In many countries, primary care physicians are supported by teledermatology services.</p><p><strong>Objective: </strong>The aim of this study was to detect learning effects and gains among GPs through teledermatology consultations (TCs) in daily practice.</p><p><strong>Methods: </strong>As part of a mixed methods study embedded in a cluster-randomized controlled trial (TeleDerm), a full survey and semiguided face-to-face interviews were conducted among GPs of participating intervention practices using the telemedicine approach. A TC assessment tool (TC-AT) was developed to evaluate the quality of clinical data and images of TCs conducted during the run-in and intervention phases, with a score ranging from 0 (lowest quality) to 10 (highest quality). Mixed methods analysis triangulated qualitative content analysis, survey data with a growth curve model calculated from TC-AT data, comparing subjective experiences of GPs with objective process data.</p><p><strong>Results: </strong>A total of 487 TCs of 33 practices were analyzed. Questionnaires from n=46 GPs (practice-level response rate: 69.9%) were included in the quantitative analysis. Two-thirds of the GPs (n=31; 67.4%) in the written survey rated the TCs as helpful for differential diagnosis and treatment management. Improved self-reported confidence in diagnosing skin diseases due to the timely clinical feedback from dermatologists was reported by more than half of the responding GPs (n=25; 54.3%). In the interviews (n=13), teleconsultations were mainly seen as a learning opportunity by the GPs. Regarding the quality of TCs, a mean TC-AT score of 7.4 (SD 1.7, range 0-10) was observed. In the growth curve model, a simple linear time trend provided the best fit to the TC-AT score trajectory across the observed study period. A significant time * TC-AT start score interaction was found (F452=30.66, P<.001). While regardless of the initial TC-AT score, repeated TCs lead to process quality improvements over time, post hoc probing of the TC-AT start score as a moderator of the learning effect over time revealed the highest improvements among GP practices with a lower initial TC-AT score (-1 SD: standardized slope=0.59, P<.001; mean: standardized slope=0.38, P<.001; +1 SD: standardized slope=0.18, P<.001).</p><p><strong>Conclusions: </strong>TCs have been shown to be an effective method of education for GPs in terms of \"learning on the job\" in daily practice. The telemedicine approach seems to be an easily implementable and effective tool to support continuing medical education in the field of dermatology. 
Strategies could be developed to train GPs and medical students in the use of TC to adequately prepare them for the increasing technological demands of their fut","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":"11 ","pages":"e65915"},"PeriodicalIF":3.2,"publicationDate":"2025-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12422744/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145034239","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
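The moderation result above, a time × baseline-quality interaction probed at -1 SD, the mean, and +1 SD of the TC-AT start score, follows the standard simple-slopes recipe: re-center the moderator so that the coefficient on time becomes the learning slope at the chosen baseline level. Below is a sketch of how such a model might be specified with statsmodels; the data frame and column names (score, time, start_score, practice) are assumptions for illustration, not the study's code:

```python
# Growth curve model for TC quality: random intercepts per GP practice,
# with the practice's initial TC-AT score moderating the linear time trend.
import pandas as pd
import statsmodels.formula.api as smf

def probe_simple_slopes(df: pd.DataFrame) -> dict:
    """Return the simple slope of quality over time at -1 SD, the mean,
    and +1 SD of the baseline (start) score."""
    slopes = {}
    mean, sd = df["start_score"].mean(), df["start_score"].std()
    for label, center in [("-1 SD", mean - sd), ("mean", mean), ("+1 SD", mean + sd)]:
        # Centering the moderator at `center` makes the 'time' coefficient
        # the learning trend for a practice at that baseline quality level.
        d = df.assign(start_c=df["start_score"] - center)
        fit = smf.mixedlm("score ~ time * start_c", d, groups=d["practice"]).fit()
        slopes[label] = fit.params["time"]
    return slopes
```

A slope that shrinks from -1 SD to +1 SD, as reported (0.59, 0.38, 0.18), indicates that practices starting from lower quality improved fastest.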
{"title":"Engaging Undergraduate Medical Students with Introductory Research Training via an Educational Escape Room: A Mixed-Methods Evaluation of Engagement and Perception.","authors":"Bastien Le Guellec, Victoria Gauthier, Rémi Lenain, Alexandra Nuytten, Luc Dauchet, Brigitte Bonneau, Erwin Gerard, Claire Castandet, Patrick Truffert, Marc Hazzan, Philippe Amouyel, Raphaël Bentegeac, Aghiles Hamroun","doi":"10.2196/71339","DOIUrl":"https://doi.org/10.2196/71339","url":null,"abstract":"<p><strong>Background: </strong>Early exposure to research methodology is essential in medical education, yet many students show limited motivation to engage with non-clinical content. Gamified strategies such as educational escape rooms (EERs) may help improve engagement, but few studies have explored their feasibility at scale or evaluated their impact beyond student satisfaction.</p><p><strong>Objective: </strong>To assess the feasibility, engagement, and perceived educational value of a large-scale escape room specifically designed to introduce third-year medical students to the principles of diagnostic test evaluation.</p><p><strong>Methods: </strong>We developed a low-cost immersive escape room based on a fictional diagnostic accuracy study, with six puzzles mapped to five predefined learning objectives: (1) identifying key components of a diagnostic study protocol, (2) selecting an appropriate gold-standard test, (3) defining a relevant study population, (4) building and interpreting a contingency table, and (5) critically appraising diagnostic metrics in context. The intervention was deployed to an entire class of third-year medical students across 12 sessions between March and April 2023. Each session included 60 minutes of gameplay and a 45-minute debriefing. Students completed pre-/post-intervention questionnaires assessing their knowledge of diagnostic test evaluation and perceptions of research training. Descriptive statistics and paired t-tests were used to evaluate score changes; univariate linear regressions assessed associations with demographics. Free-text comments were analyzed using Reinert's hierarchical classification.</p><p><strong>Results: </strong>Among 530 participants, 490 completed the full evaluation. Many participants had limited prior exposure to escape rooms (206/490, 42% had never participated), and most reported low initial confidence with critical appraisal of scientific articles. All student teams completed the scenario, with a mean completion time of 53 (±4) minutes. Mean overall knowledge scores increased from 62/100 (±1) before to 82/100 (±2) after the activity (+32%, p<0.001). Gains were observed across all learning objectives and were not influenced by age, sex, or prior experience. Students rated the EER as highly entertaining (9.1±1.1/10) and educational (8.2±1.5/10). Following the intervention, 87% (393/452) felt more comfortable with critical appraisal of diagnostic test studies, and 79% (357/452) considered the escape room format highly appropriate for an introductory session. Thematic analysis of open-ended feedback identified six clusters, including engagement, teamwork, and perceived usefulness of the pedagogical approach. 
Word clouds showed a marked shift from negative to positive attitudes toward research training.</p><p><strong>Conclusions: </strong>This study demonstrates the feasibility and enthusiastic reception of a large-scale, reusable escape room aimed at teaching the fundamental principl","PeriodicalId":36236,"journal":{"name":"JMIR Medical Education","volume":" ","pages":""},"PeriodicalIF":3.2,"publicationDate":"2025-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145201758","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
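One arithmetic detail worth making explicit for the record above: the "+32%" attached to the knowledge scores is a relative gain over the baseline mean, not a 32-point change on the 100-point scale:

```latex
\frac{82 - 62}{62} \;\approx\; 0.32
\quad\text{(a 20-point absolute gain, a 32\% relative gain)}
```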