Medical Science Educator | Pub Date: 2026-03-16 | eCollection Date: 2026-02-01 | DOI: 10.1007/s40670-026-02702-x
Letter from the Editor-in-Chief
David Harris
Vol. 36(1), p. 1. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13043829/pdf/
Medical Science Educator | Pub Date: 2026-02-07 | eCollection Date: 2025-12-01 | DOI: 10.1007/s40670-026-02660-4
Letter from the Editor-in-Chief
David Harris
Vol. 35(6), p. 2677. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12960875/pdf/
Medical Science Educator | Pub Date: 2026-02-07 | DOI: 10.1007/s40670-026-02659-x
Harnessing Generative Artificial Intelligence to Teach Assessment Literacy in Anatomy
Alicia Morgan, Marli Crabtree, Caitlin Sachsenmeier, Amanda Troy
Abstract: Incoming medical students attended an anatomy bootcamp introducing prompt engineering and critical appraisal of AI-generated questions. MBS students completed an AI-assisted item-writing assignment, requiring generation, appraisal, and revision, that was then used to create their unit review session. Both experiences nurtured ownership, assessment literacy, and critical reasoning while reinforcing anatomy learning.
Vol. 36(1), pp. 3-4. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13043933/pdf/
Medical Science Educator | Pub Date: 2026-01-16 | eCollection Date: 2026-02-01 | DOI: 10.1007/s40670-026-02639-1
Can Large Language Models Replicate Systematic Review Outcome Classifications in Medical Education? A Pilot Study Using Kirkpatrick Levels
Giuliano Romano, Emilio Romano, Michelle Rau
Abstract: Systematic reviews in medical education often classify outcomes using the Kirkpatrick framework, but manual coding is time-consuming and subjective. We conducted a proof-of-concept study testing ChatGPT (GPT-5, August 2025 release) on 32 full-text articles from a published systematic review of sepsis education. Agreement with human-coded outcomes was modest: percent agreement of 50%, unweighted κ = 0.170 (95% CI 0.000-0.458), weighted κ = 0.351 (95% CI 0.074-0.629). Most disagreements were between adjacent levels.
Supplementary information is available in the online version at 10.1007/s40670-026-02639-1.
Vol. 36(1), pp. 11-15. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13043860/pdf/
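The pilot study above reports percent agreement plus unweighted and weighted Cohen's κ for two raters (human vs. LLM) on an ordinal scale. As a minimal sketch of how those statistics are computed, here is a pure-Python implementation; the ratings below are hypothetical illustrations, not the study's data, and linear weighting is an assumption, since the abstract does not state which weighting scheme was used.

```python
from typing import Sequence

def cohen_kappa(r1: Sequence, r2: Sequence, labels: Sequence,
                weighted: bool = False) -> float:
    """Cohen's kappa for two raters. With weighted=True, linear weights give
    partial credit for near-misses on an ordinal scale (e.g. Kirkpatrick levels)."""
    n, k = len(r1), len(labels)
    idx = {lab: i for i, lab in enumerate(labels)}
    # Confusion matrix: rows = rater 1, columns = rater 2.
    conf = [[0] * k for _ in range(k)]
    for a, b in zip(r1, r2):
        conf[idx[a]][idx[b]] += 1
    row = [sum(r) for r in conf]                                  # rater-1 marginals
    col = [sum(conf[i][j] for i in range(k)) for j in range(k)]   # rater-2 marginals

    def w(i: int, j: int) -> float:
        # Linear weight: full credit on the diagonal, partial for adjacent levels.
        return 1 - abs(i - j) / (k - 1) if weighted else float(i == j)

    # Observed vs. chance-expected (weighted) agreement.
    po = sum(w(i, j) * conf[i][j] for i in range(k) for j in range(k)) / n
    pe = sum(w(i, j) * row[i] * col[j] for i in range(k) for j in range(k)) / n ** 2
    return (po - pe) / (1 - pe)

# Hypothetical human vs. LLM Kirkpatrick codings (levels 1-4), illustrative only.
human = [1, 2, 2, 3, 4, 1, 2, 3]
llm   = [1, 2, 3, 3, 4, 2, 2, 2]
print(cohen_kappa(human, llm, labels=[1, 2, 3, 4]))                 # unweighted
print(cohen_kappa(human, llm, labels=[1, 2, 3, 4], weighted=True))  # linear weights
```

Because all three disagreements in this toy example are between adjacent levels, the weighted κ comes out higher than the unweighted one, the same pattern the study reports (0.351 vs. 0.170).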
Medical Science Educator | Pub Date: 2026-01-14 | eCollection Date: 2026-02-01 | DOI: 10.1007/s40670-025-02635-x
Catalysts of Change: Using AI to Lower the Activation Energy for Developing Gamified Learning Experiences in Health Profession Education
Stephanie N Moore-Lotridge, Brianne E Lewis
Abstract: This monograph provides a practical guide for using generative AI, such as ChatGPT, to support the design and implementation of gamified learning in health professions education. By lowering the "activation energy" required to create immersive educational experiences, generative AI can help educators overcome common barriers such as limited time, resources, and expertise. We examine a range of gamified learning strategies and illustrate how generative AI and prompt engineering can be used to support their design and implementation. While generative AI enhances creativity and workflow efficiency, faculty oversight remains essential to ensure accuracy, relevance, and alignment with educational goals.
Vol. 36(1), pp. 27-38. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13043820/pdf/
Medical Science Educator | Pub Date: 2026-01-08 | eCollection Date: 2026-02-01 | DOI: 10.1007/s40670-025-02633-z
Environmental Disasters Affecting Health Professions Education: Surviving the Storm and After
Kin Ly, Susan Ely, Erica Ausel, Andy Brenneman, Steve Garwood, Catherine Gathu, Mark Hernandez, Uzoma Ikonne, Douglas McKell, Akshata R Naik, Rebecca Rowe, Tracey A H Taylor, Thomas Thesen
Vol. 36(1), pp. 477-481. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13043959/pdf/
Medical Science Educator | Pub Date: 2026-01-08 | eCollection Date: 2026-02-01 | DOI: 10.1007/s40670-025-02615-1
Using a Large Language Model to Extract Information from Student Submitted Free-Text Feedback
Nikola Košćica, Colleen Gillespie, Tyler Webster, Suparna Sarkar, Michael Poles
Abstract: Student feedback is essential to curriculum evaluation. While methods for analyzing quantitative feedback data are readily available and easy to implement, methods for analyzing text-based, qualitative feedback data are less widely available and require more time, effort, and expertise. Yet students' responses to open-ended questions hold great value for curriculum refinement: narrative comments can identify unanticipated areas of concern that closed-ended rating scales might miss, and they often provide specific suggestions for improvement. In this paper, we assess the feasibility and accuracy of using a large language model (ChatGPT 4o) to analyze medical student comments in response to a question asking them to identify basic science topics they found challenging. ChatGPT 4o was used to categorize and summarize students' identification of and explanations for these challenging topics. We describe the specific prompts used to generate and refine results, and we conducted a series of experiments to explore consistency, accuracy, and meaningfulness: (1) reviewing the consistency of 10 replications of the ChatGPT 4o request; (2) comparing "expert" human ratings of topic categories with ChatGPT's categorization; and (3) comparing "expert" human analyses of the explanations for a challenging topic with those generated by ChatGPT. Overall, we found the LLM output to be useful, fairly closely aligned with human experts, and easy to implement. However, results were not perfectly replicated across multiple trials, and we found some differences between human and LLM analyses. Our use case is well suited to the current capabilities of generative AI models: summaries can be generated rapidly and easily, with sufficient (though not perfect) consistency and accuracy to support continuous quality improvement of the basic science curriculum.
Supplementary information is available in the online version at 10.1007/s40670-025-02615-1.
Vol. 36(1), pp. 63-72. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13043980/pdf/
Medical Science Educator | Pub Date: 2026-01-06 | eCollection Date: 2026-02-01 | DOI: 10.1007/s40670-025-02621-3
Applications and Outcomes of Large-Language-Model-Generated Feedback in Undergraduate Medical Education: A Scoping Review
Yavuz Selim Kıyak, Tuğba İş-Kara, Emre Emekli
Background: Large language models (LLMs) are increasingly integrated into undergraduate medical education, particularly for generating learner feedback. While early LLM studies show promise, their educational impact and usage patterns remain unclear. The objective of this study is to systematically map how LLMs are being used to generate feedback for undergraduate medical students and to examine reported educational outcomes.
Methods: A scoping review was conducted following Arksey and O'Malley's framework and reported using PRISMA-ScR. We searched PubMed and Web of Science, identifying 4325 records. After screening and review, 42 studies were included. Data were charted using a structured form, and outcomes were classified using Kirkpatrick levels.
Results: The 42 included studies originated mostly from Global North countries, with nearly all using OpenAI's GPT models. Feedback was delivered in two main contexts: simulated clinical encounters and text-based assessment tasks. Only 8 studies (19%) used randomized controlled trial designs. Educational outcomes were distributed as follows: 22 studies (52%) included no student data (Level 0), 10 reported student reactions (Level 1), 10 assessed learning gains (Level 2), and none addressed behavior change or patient-level effects (Levels 3-4). LLM-generated feedback often matched expert feedback in short-term effectiveness but showed variable accuracy.
Conclusions: LLM-generated feedback is being explored across a range of educational settings, showing early signs of feasibility and perceived utility. However, the evidence base is limited in rigor and generalizability. Future research should assess behavioral and patient-level outcomes.
Supplementary information is available in the online version at 10.1007/s40670-025-02621-3.
Vol. 36(1), pp. 81-99. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13043978/pdf/
Medical Science Educator | Pub Date: 2026-01-05 | eCollection Date: 2026-02-01 | DOI: 10.1007/s40670-025-02599-y
Integrating AI Literacy into Medical Education: Preparing Future Clinicians for an AI-Driven Healthcare System
Linda Chang, Radhika Sreedhar
Abstract: The rapid integration of artificial intelligence (AI) into healthcare necessitates curricular reforms to prepare future providers. Despite training frameworks for AI in medicine from the National Academy of Medicine (NAM) and the World Health Organization (WHO), most medical students receive minimal AI education. Our institution developed a phased, theory-driven AI in Medicine curriculum guided by connectivism and constructivism. Using stakeholder engagement, a needs assessment of 529 students, and backward design, we integrated AI literacy into preclinical courses, clinical clerkships, and an elective course. This spiral curriculum reinforces application through case studies and hands-on activities, fostering clinical relevance. Our implementation offers a scalable model for integrating AI competencies within the health professions curriculum.
Vol. 36(1), pp. 17-22. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13043780/pdf/
Medical Science Educator | Pub Date: 2026-01-04 | eCollection Date: 2026-02-01 | DOI: 10.1007/s40670-025-02623-1
Generative Artificial Intelligence in Medical Training: Utilization Patterns Across Knowledge, Patient Care, Systems Reasoning, and Innovation
Grace L Park, Gary L Beck Dallaghan, Joe Bradley, Qasim Sikander, Hannah Jung, Kevin Zhang, Gregory Polites, Janet Jokela
Abstract: Since 2022, generative artificial intelligence (AI) use has grown rapidly across many sectors, including medical education. While prior research has explored perceptions of AI, understanding of actual AI use among medical trainees remains limited. This study surveyed medical trainees to identify patterns of generative AI use. Usage patterns varied by phase of training, with ChatGPT emerging as the predominant platform across all phases. While awareness of AI policies was limited, most respondents reported efforts to use AI responsibly. Implications include ensuring equitable access and onboarding trainees on AI use policies.
Vol. 36(1), pp. 5-10. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13043961/pdf/