Medical Teacher | Pub Date: 2025-06-16 | DOI: 10.1080/0142159X.2025.2515983
Maggie Frej, Janet Skinner, Lorraine Close
"It's high time for TIME." (p. 1)
Medical Teacher | Pub Date: 2025-06-16 | DOI: 10.1080/0142159X.2025.2497896
Bingxin Chen, Xinyun Yang, Hui Wang
"The role of interdisciplinary integration in medical education." (p. 1)
Medical Teacher | Pub Date: 2025-06-16 | DOI: 10.1080/0142159X.2025.2517719
Dogus Darici, Lion Sieg, Hendrik Eismann, Jan Karsten
"Leader-follower dynamics in medical training: A dual mobile eye-tracking analysis of teacher-student gaze patterns." (pp. 1-9)

Background: In medical training, learning typically involves asymmetric interaction: instructors explain and demonstrate, and learners follow their instructions. There are also moments, however, when learners lead the interaction, for example by pointing out unclear connections. This shifting 'dance of leadership' manifests in measurable patterns of visual attention, whose impact on learning is not well understood.

Methods: Using dual mobile eye-tracking, we explored the joint eye movements of 29 teacher-student pairs (mean age = 24 ± 3 years; 16 female) during simulated sonography training in an OR environment. Using diagonal cross-recurrence analysis, we computed the lag time for one person's gaze to couple with the other's gaze pattern, which we used as a proxy for leader-follower behavior. We then quantified the relative frequency of leading behaviors across distinct regions of the training environment and examined their relationship to learning performance metrics.

Results: Leader-follower behavior varied substantially. Teachers consistently led attention on the sonography monitor, showing tight coupling and minimal variation, reflecting its role as the procedural core. Students more frequently initiated gaze toward anatomical references and during interpersonal interactions. Importantly, only teacher-led guidance toward anatomical references was positively correlated with learning outcomes (r = .50, p < .01).

Conclusions: This study reveals that visual leadership during sonography training follows a two-tiered structure: instructor-dominated domains for technical execution and learner-engaged zones for exploration and social interaction. These insights into leader-follower dynamics could be used for targeted analysis and adaptation of clinical teaching situations.
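The diagonal cross-recurrence idea in this abstract can be sketched in a few lines: slide one gaze stream against the other and find the lag that maximizes agreement. This is an illustrative toy under invented assumptions (categorical area-of-interest labels per time step, a maximum lag of 3 steps), not the authors' analysis pipeline:

```python
# Sketch of a diagonal cross-recurrence lag analysis (illustrative only).
# Gaze streams are simplified to one area-of-interest (AOI) label per time
# step; a positive best lag means the student's gaze follows the teacher's,
# i.e. the teacher "leads" in this toy convention.

def cross_recurrence(teacher, student, lag):
    """Fraction of aligned time steps where student at t+lag matches teacher at t."""
    n = len(teacher)
    pairs = [(teacher[t], student[t + lag])
             for t in range(n) if 0 <= t + lag < n]
    if not pairs:
        return 0.0
    return sum(a == b for a, b in pairs) / len(pairs)

def leading_lag(teacher, student, max_lag=3):
    """Lag (in time steps) that maximizes gaze coupling; sign hints at leadership."""
    lags = range(-max_lag, max_lag + 1)
    return max(lags, key=lambda l: cross_recurrence(teacher, student, l))

# Toy example: the student looks at the same AOI one step after the teacher.
teacher = ["monitor", "probe", "monitor", "anatomy", "monitor", "probe"]
student = ["probe", "monitor", "probe", "monitor", "anatomy", "monitor"]
print(leading_lag(teacher, student))  # a positive lag: the teacher leads here
```

Real analyses work on finer-grained gaze coordinates and report lag distributions rather than a single argmax, but the leader-follower logic is the same.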
Medical Teacher | Pub Date: 2025-06-13 | DOI: 10.1080/0142159X.2025.2513426
Ricky Ellis, Andy Knapton, Jane Cannon, Amanda J Lee, Jennifer Cleland
"A multivariate analysis examining the relationship between sociodemographic differences and UK graduates' performance on postgraduate medical exams." (pp. 1-15)

Background: Studies examining group-level performance (differential attainment, or DA) in UK postgraduate medical examinations have, to date, focused on a limited number of exams and sociodemographic factors and used relatively simple analyses. This limits understanding of how different characteristics intersect in relation to performance on these critical assessments, which are required for progression through training and to consultant status. This study aimed to address these gaps by identifying independent predictors of success or failure for UK medical school graduates (UKGs) across UK postgraduate medical examinations.

Methods: This retrospective cohort study used multivariate logistic regression to identify independent predictors of success or failure at each examination, accounting for prior academic attainment at the point of entry to medical school. Anonymised pass/fail data for first examination attempts were extracted from the General Medical Council (GMC) database and analysed for all UKG examination candidates between 2014 and 2020.

Results: Between 2014 and 2020, 132,370 first examination attempts were made by UKGs, and 99,840 (75.4%) of these attempts were passed. Multivariate analyses revealed that gender, age, ethnicity, religion, sexual orientation, disability, working less than full time, and socioeconomic and educational background were all statistically significant independent predictors of success or failure in written and clinical examinations. The strongest independent predictors of failing written and/or clinical examinations were being from a minority ethnic background and having a registered disability.

Conclusions: This large-scale study found that, even after accounting for prior academic attainment, there were significant differences in candidate examination pass rates according to key sociodemographic characteristics. The GMC, Medical Royal Colleges, and postgraduate training organisations now have a responsibility to use these data to guide future research and interventions that aim to reduce these attainment gaps.
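As a toy illustration of the kind of group-level comparison that motivates differential-attainment analyses, one can compute first-attempt pass rates and an unadjusted odds ratio from a 2x2 table. The counts below are invented, and the study's actual method (multivariate logistic regression adjusting for prior attainment) goes well beyond this:

```python
# Illustrative only: an unadjusted group-level pass-rate comparison. The
# counts are invented; the study itself used multivariate logistic
# regression adjusting for prior academic attainment.
import math

def pass_rate(passed, failed):
    """Proportion of attempts passed."""
    return passed / (passed + failed)

def odds_ratio(pass_a, fail_a, pass_b, fail_b):
    """Unadjusted odds ratio of passing for group A relative to group B."""
    return (pass_a / fail_a) / (pass_b / fail_b)

# Hypothetical 2x2 table of first-attempt outcomes for two candidate groups.
a_pass, a_fail = 800, 200   # group A: 80% pass rate
b_pass, b_fail = 700, 300   # group B: 70% pass rate

or_ab = odds_ratio(a_pass, a_fail, b_pass, b_fail)
print(f"pass rates: {pass_rate(a_pass, a_fail):.0%} vs {pass_rate(b_pass, b_fail):.0%}")
print(f"unadjusted OR = {or_ab:.2f}, log-odds gap = {math.log(or_ab):.2f}")
```

In a logistic regression, that log-odds gap is what a group coefficient estimates, with the adjustment variables (e.g. prior attainment) held constant.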
Medical Teacher | Pub Date: 2025-06-13 | DOI: 10.1080/0142159X.2025.2515988
Ryan Jenkins, Erin Gentry Lamb
"Learning end-of-life care: Outcome measures of a medical student humanities curriculum." (pp. 1-8)

Purpose: Medical humanities education varies widely and lacks robust outcomes data, owing partly to disagreement over whether quantitative assessment is appropriate for this topic. End-of-life education likewise lacks standardization, and learners consistently want it improved.

Methods: We created a humanities intervention to teach foundational end-of-life concepts and taught it electively to 42 preclinical second-year medical students (MS2s). All MS2s (n = 182) completed quantitative end-of-life skills assessments, including a novel standardized patient (SP) encounter. Post-encounter measures included the Revised Collett-Lester Fear of Death Scale (CL-FODS), the PANAS-X emotional reactivity scales, and student and SP performance assessments; students also completed the CL-FODS longitudinally during the year and gave summative feedback on curricular preparedness.

Results: Intervention students reported higher death anxiety than controls when measured longitudinally, but lower death anxiety immediately after the SP encounter. SPs rated intervention students as performing worse than controls on jargon use and respect for autonomy. At the end of the year, intervention students rated their curricular preparedness higher than controls. All other measures, including the remaining performance skills and the PANAS-X, showed no differences.

Conclusions: Intervention students showed mixed results on death anxiety, suggesting task-specific and cognitive rather than affective benefits. These results suggest a need for further refinement of quantitative pedagogical evaluation of humanities curricula.
Medical Teacher | Pub Date: 2025-06-12 | DOI: 10.1080/0142159X.2025.2513419
Mingyang Chen, Jiayi Ma, Xiaoli Cui, Qianling Dai, Haiyan Hu, Yijin Wu, Sulaiya Husaiyin, Aiyuan Wu, Youlin Qiao
"Advancing medical education in cervical cancer control with large language models for multiple-choice question generation." (pp. 1-11)

Objective: To explore the feasibility of using large language models (LLMs) to generate multiple-choice questions (MCQs) for cervical cancer control education and to compare them with questions created by clinicians.

Methods: GPT-4o and Baichuan4 each generated 40 MCQs using iteratively refined prompts, and clinicians wrote 40 MCQs for comparison. The resulting 120 MCQs were evaluated by 12 experts across five dimensions (correctness, clarity and specificity, cognitive level, clinical relevance, explainability) on a 5-point Likert scale. Difficulty and discriminatory power were tested with practitioners, who were also asked to identify the source of each MCQ.

Results: Automated MCQs were similar to clinician-generated ones in most dimensions. However, clinician-generated MCQs had a higher cognitive level (4.00 ± 1.08) than those from GPT-4o (3.68 ± 1.07) and Baichuan4 (3.70 ± 1.13). Testing with 312 practitioners revealed no significant differences in difficulty or discriminatory power among clinician-generated (59.51 ± 24.50, 0.38 ± 0.14), GPT-4o (61.89 ± 25.36, 0.30 ± 0.19), and Baichuan4 (59.79 ± 26.25, 0.33 ± 0.15) items. Recognition rates for LLM-generated MCQs ranged from 32% to 50%, with experts outperforming general practitioners at identifying the question setters.

Conclusions: With engineered prompts, LLMs can generate MCQs comparable to clinician-generated ones, though clinicians performed better on cognitive level. LLM-assisted MCQ generation could improve efficiency but requires rigorous validation to ensure educational quality.
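Difficulty and discriminatory power as reported here are classical test theory indices. A minimal sketch, assuming proportion-correct difficulty and upper-minus-lower 27% group discrimination (the paper's exact formulas may differ), looks like this:

```python
# Classical test theory indices for one MCQ item (a sketch; the study's
# exact formulas may differ). Inputs are 0/1 responses to the item, paired
# with each examinee's total test score.

def item_difficulty(responses):
    """Proportion of examinees answering the item correctly (higher = easier)."""
    return sum(responses) / len(responses)

def item_discrimination(responses, totals, frac=0.27):
    """Difference in item pass rate between top and bottom scorer groups."""
    k = max(1, round(len(totals) * frac))
    order = sorted(range(len(totals)), key=lambda i: totals[i])
    low, high = order[:k], order[-k:]
    p = lambda grp: sum(responses[i] for i in grp) / len(grp)
    return p(high) - p(low)

# Toy data: 10 examinees; the item is answered correctly mostly by high scorers.
responses = [1, 1, 1, 1, 1, 1, 0, 0, 0, 1]
totals    = [95, 90, 88, 80, 75, 70, 40, 35, 30, 85]
print(item_difficulty(responses))              # 0.7
print(item_discrimination(responses, totals))  # 1.0: item separates groups well
```

An item answered equally often by strong and weak examinees would score near zero discrimination, which is what flags it for review.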
Medical Teacher | Pub Date: 2025-06-12 | DOI: 10.1080/0142159X.2025.2517723
Olivia Ng
"The paradox of GenAI in assessment: Navigating cost, value, and validity." (p. 1)
Medical Teacher | Pub Date: 2025-06-12 | DOI: 10.1080/0142159X.2025.2513425
Setthanan Jarukasemkit, Seksan Yoadsanit, Chawisa Teansue, Peerapass Sukkrasanti, Phanuwich Kaewkamjornchai, Borwornsom Leerapan
"Peer-to-peer mentorship emerges from mandatory research coursework: A social network case study." (pp. 1-10)

Purpose: Research training thrives when coursework is paired with peer-to-peer mentorship. To understand how emerging collaborations promote the research productivity of medical students, this study investigates the development of peer-to-peer advice-seeking behaviors and identifies the social mechanisms that foster collaboration.

Methods: Cross-sectional surveys on advice-seeking behaviors were collected from 95 medical students awarded research presentation or publication grants from 2016 to 2023. Interrupted time series (ITS) analysis assessed the impact of research coursework, and social network analysis (SNA) visualized advice-seeking patterns and community structure. Path analysis and subgroup analysis identified influential factors that led to grant awarding.

Results: ITS showed an increase in grant awarding after the coursework was implemented. SNA revealed a shift toward decentralized peer-to-peer advice-seeking, with group formation mediating 20.41% of the effect on grant awarding. Students preferentially sought advice from peers at similar educational stages, regardless of gender or research interest. Subgroup analysis revealed advice-seeking differences across genders, educational stages, cohorts, and publication statuses.

Conclusions: The network perspective highlights group formation as a mediator of research productivity. Educators should consider the growing trend toward peer-to-peer mentorship and the influence of institutional policies on student behavior. Understanding advice-seeking patterns can inform effective strategies to support and enhance undergraduate research engagement.
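An advice-seeking network of this kind can be represented as a directed edge list. The sketch below, using an invented edge list rather than the study's data, shows one crude way to gauge whether advice-seeking is centralized on a few hubs or spread across peers:

```python
# A minimal advice-network sketch (invented edge list, not the study's data):
# each directed edge (a, b) means student a sought research advice from b.
# The share of ties received by the most-consulted student is one simple
# signal of (de)centralization; real SNA uses richer centrality measures.
from collections import Counter

edges = [("s1", "s2"), ("s3", "s2"), ("s4", "s2"),   # s2 is an advice hub
         ("s5", "s6"), ("s6", "s5"), ("s4", "s3")]

in_degree = Counter(b for _, b in edges)             # advice ties received
top_share = max(in_degree.values()) / len(edges)     # hub's share of all ties

print(dict(in_degree), f"top_share={top_share:.2f}")
```

A falling `top_share` over successive cohorts would be consistent with the decentralization the authors describe.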
Medical Teacher | Pub Date: 2025-06-12 | DOI: 10.1080/0142159X.2025.2513418
Hannah Wu, Daniel Lee, Toby Zerner, Stefan Court-Kowalski, Peter Devitt, Edward Palmer
"A comparison of the psychometric properties of GPT-4 versus human novice and expert authors of clinically complex MCQs in a mock examination of Australian medical students." (pp. 1-11)

Purpose: Creating clinically complex multiple choice questions (MCQs) for medical assessment can be time-consuming. Large language models such as GPT-4, a type of generative artificial intelligence (AI), are a potential MCQ design tool. Evaluating the psychometric properties of AI-generated MCQs is essential to ensuring quality.

Methods: A 120-item mock examination was constructed, containing 40 human-generated MCQs written at novice item-writer level, 40 written at expert level, and 40 AI-generated MCQs. All examination items underwent panel review to ensure they tested higher-order cognitive skills and met a minimum acceptable standard. The online mock examination was administered to Australian medical students, who were blinded to each item's author.

Results: A total of 234 medical students completed the examination. Analysis showed acceptable reliability (Cronbach's alpha = 0.836). There were no differences in item difficulty or discrimination between AI, Novice, and Expert items; mean item difficulty was 'easy' and mean item discrimination 'fair' across all groups. AI items had lower distractor efficiency (39%) than Novice items (55%, p = 0.035), but did not differ from Expert items (48%, p = 0.382).

Conclusions: The psychometric properties of AI-generated MCQs are comparable to those of human-generated MCQs at both novice and expert level. Item quality can be improved across all author groups. AI-generated items should undergo human review to enhance distractor efficiency.
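Cronbach's alpha, the reliability statistic reported here, has a standard closed form: alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A self-contained sketch on toy 0/1 response data (not the study's data):

```python
# Cronbach's alpha from a 0/1 response matrix (rows = examinees, columns =
# items), using the standard formula. Toy data for illustration; population
# variance is used consistently throughout.

def variance(xs):
    """Population variance."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(matrix):
    k = len(matrix[0])                                  # number of items
    item_vars = [variance([row[j] for row in matrix]) for j in range(k)]
    total_var = variance([sum(row) for row in matrix])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

scores = [
    [1, 1, 1, 1],   # consistent high scorer
    [1, 1, 1, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 0],   # consistent low scorer
]
print(round(cronbach_alpha(scores), 3))  # 0.867
```

Alpha rises when items covary (examinees who do well on one item do well on others), which is why a 120-item exam mixing three author groups can still report a single overall reliability.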
Medical Teacher | Pub Date: 2025-06-11 | DOI: 10.1080/0142159X.2025.2517727
Diann S Eley
"The ethical imperative of civility in medicine." (pp. 1-3)