Medical Teacher | Pub Date: 2025-01-20 | DOI: 10.1080/0142159X.2024.2430365
Alejandro García-Rudolph, David Sanchez-Pinsach, Mark Andrew Wright, Eloy Opisso, Joan Vidal
{"title":"Assessing readability of explanations and reliability of answers by GPT-3.5 and GPT-4 in non-traumatic spinal cord injury education.","authors":"Alejandro García-Rudolph, David Sanchez-Pinsach, Mark Andrew Wright, Eloy Opisso, Joan Vidal","doi":"10.1080/0142159X.2024.2430365","DOIUrl":"https://doi.org/10.1080/0142159X.2024.2430365","url":null,"abstract":"<p><strong>Purpose: </strong>Our study aimed to: i) Assess the readability of textbook explanations using established indexes; ii) Compare these with GPT-4's default explanations, ensuring similar word counts for direct comparisons; iii) Evaluate GPT-4's adaptability by simplifying high-complexity explanations; iv) Determine the reliability of GPT-3.5 and GPT-4 in providing accurate answers.</p><p><strong>Material and methods: </strong>We utilized a textbook designed for ABPMR certification. Our analysis covered 50 multiple-choice questions, each with a detailed explanation, focusing on non-traumatic spinal cord injury (NTSCI).</p><p><strong>Results: </strong>Our analysis revealed statistically significant differences in readability scores, with the textbook achieving 14.5 (SD = 2.5) compared to GPT-4's 17.3 (SD = 1.9), indicating that GPT-4's explanations are generally more complex (<i>p</i> < 0.001). Using the Flesch Reading Ease Score, 86% of GPT-4's explanations fell into the 'Very difficult' category, significantly higher than the textbook's 58% (<i>p</i> = 0.006). GPT-4 successfully demonstrated adaptability by reducing the mean readability score of the top-nine most complex explanations, maintaining the word count. Regarding reliability, GPT-3.5 and GPT-4 scored 84% and 96% respectively, with GPT-4 outperforming GPT-3.5 (<i>p</i> = 0.046).</p><p><strong>Conclusions: </strong>Our results confirmed GPT-4's potential in medical education by providing highly accurate yet often complex explanations for NTSCI, which were successfully simplified without losing accuracy.</p>","PeriodicalId":18643,"journal":{"name":"Medical Teacher","volume":" ","pages":"1-8"},"PeriodicalIF":3.3,"publicationDate":"2025-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143008431","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Medical Teacher | Pub Date: 2025-01-20 | DOI: 10.1080/0142159X.2025.2451870
Leon Nissen, Johanna Flora Rother, Marie Heinemann, Lara Marie Reimer, Stephan Jonas, Tobias Raupach
{"title":"A randomised cross-over trial assessing the impact of AI-generated individual feedback on written online assignments for medical students.","authors":"Leon Nissen, Johanna Flora Rother, Marie Heinemann, Lara Marie Reimer, Stephan Jonas, Tobias Raupach","doi":"10.1080/0142159X.2025.2451870","DOIUrl":"https://doi.org/10.1080/0142159X.2025.2451870","url":null,"abstract":"<p><strong>Purpose: </strong>Self-testing has been proven to significantly improve not only simple learning outcomes, but also higher-order skills such as clinical reasoning in medical students. Previous studies have shown that self-testing was especially beneficial when it was presented with feedback, which leaves the question whether an immediate and personalized feedback further encourages this effect. Therefore, we hypothesised that individual feedback has a greater effect on learning outcomes, compared to generic feedback.</p><p><strong>Materials and methods: </strong>In a randomised cross-over trial, German medical students were invited to voluntarily answer daily key-feature questions <i>via</i> an App. For half of the items they received a generalised feedback by an expert, while the feedback on the other half was generated immediately through ChatGPT. After the intervention, the students participated in a mandatory exit exam.</p><p><strong>Results: </strong>Those participants who used the app more frequently experienced a better learning outcome compared to those who did not use it frequently, even though this finding was only examined in a correlative nature. The individual ChatGPT generated feedback did not show a greater effect on exit exam scores compared to the expert comment (51.8 ± 22.0% vs. 55.8 ± 22.8%; <i>p</i> = 0.06).</p><p><strong>Conclusion: </strong>This study proves the concept of providing personalised feedback on medical questions. Despite the promising results, improved prompting and further development of the application seems necessary to strengthen the possible impact of the personalised feedback. Our study closes a research gap and holds great potential for further use not only in medicine but also in other academic fields.</p>","PeriodicalId":18643,"journal":{"name":"Medical Teacher","volume":" ","pages":"1-7"},"PeriodicalIF":3.3,"publicationDate":"2025-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143008429","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Medical Teacher | Pub Date: 2025-01-16 | DOI: 10.1080/0142159X.2025.2452962
Supianto
{"title":"Balancing innovation and tradition: A critical reflection on the assessment PROFILE framework.","authors":"Supianto","doi":"10.1080/0142159X.2025.2452962","DOIUrl":"https://doi.org/10.1080/0142159X.2025.2452962","url":null,"abstract":"","PeriodicalId":18643,"journal":{"name":"Medical Teacher","volume":" ","pages":"1"},"PeriodicalIF":3.3,"publicationDate":"2025-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143008432","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Medical Teacher | Pub Date: 2025-01-11 | DOI: 10.1080/0142159X.2024.2431409
Daniel J Minter, Annabel K Frank, Logan Pierce, Brian Schwartz, Sirisha Narayana
{"title":"Response to: 'A confidentiality conundrum: Case tracking for medical education'.","authors":"Daniel J Minter, Annabel K Frank, Logan Pierce, Brian Schwartz, Sirisha Narayana","doi":"10.1080/0142159X.2024.2431409","DOIUrl":"https://doi.org/10.1080/0142159X.2024.2431409","url":null,"abstract":"","PeriodicalId":18643,"journal":{"name":"Medical Teacher","volume":" ","pages":"1-2"},"PeriodicalIF":3.3,"publicationDate":"2025-01-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142965898","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Medical Teacher | Pub Date: 2025-01-11 | DOI: 10.1080/0142159X.2024.2431407
Daniel J Minter, Annabel K Frank, Logan Pierce, Brian Schwartz, Sirisha Narayana
{"title":"Response to: 'Is the practice of case-tracking a substitute for traditional feedback?'","authors":"Daniel J Minter, Annabel K Frank, Logan Pierce, Brian Schwartz, Sirisha Narayana","doi":"10.1080/0142159X.2024.2431407","DOIUrl":"https://doi.org/10.1080/0142159X.2024.2431407","url":null,"abstract":"","PeriodicalId":18643,"journal":{"name":"Medical Teacher","volume":" ","pages":"1-2"},"PeriodicalIF":3.3,"publicationDate":"2025-01-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142965951","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Medical Teacher | Pub Date: 2025-01-10 | DOI: 10.1080/0142159X.2024.2445058
Jennifer Benjamin, Ken Masters, Anoop Agrawal, Heather MacNeill, Neil Mehta
{"title":"Twelve tips on applying AI tools in HPE scholarship using Boyer's model.","authors":"Jennifer Benjamin, Ken Masters, Anoop Agrawal, Heather MacNeill, Neil Mehta","doi":"10.1080/0142159X.2024.2445058","DOIUrl":"10.1080/0142159X.2024.2445058","url":null,"abstract":"<p><p>AI has changed the landscape of health professions education. With the hype now behind us, we find ourselves in the phase of reckoning, considering what's next; where do we start and how can educators use these powerful tools for daily teaching and learning. We recognize the great need for training to use AI meaningfully for education. Boyer's model of scholarship provides a pedagogical approach for teaching with AI and how to maximize these efforts towards scholarship. By offering practical solutions and demonstrating their usefulness, this Twelve tips article demonstrates how to apply AI towards scholarship by leveraging the capabilities of the tools. Despite their potential, our recommendation is to exercise caution against AI dependency and to role model responsible use of AI by evaluating AI outputs critically with a commitment to accuracy and scrutinize for hallucinations and false citations.</p>","PeriodicalId":18643,"journal":{"name":"Medical Teacher","volume":" ","pages":"1-6"},"PeriodicalIF":3.3,"publicationDate":"2025-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142951365","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Medical Teacher | Pub Date: 2025-01-10 | DOI: 10.1080/0142159X.2024.2445045
Abigail Konopasky, Gabrielle M Finn, Lara Varpio
{"title":"Moving Beyond Static, Individualistic Approaches to Agency: Theories of Agency for Medical Education Researchers: AMEE Guide No. 177.","authors":"Abigail Konopasky, Gabrielle M Finn, Lara Varpio","doi":"10.1080/0142159X.2024.2445045","DOIUrl":"https://doi.org/10.1080/0142159X.2024.2445045","url":null,"abstract":"<p><p>Agency - the capacity to produce an effect - is a foundational aspect of medical education. Agency is usually conceptualized at the level of the <i>individual</i>, with each learner charged with taking responsibility to pull themselves up by their bootstraps. This conceptualization is problematic. First, collaboration is a central component of patient care, which does not align well with an individualistic approach. Second, a growing body of literature documents how minoritized and marginalized trainees experience inequitable restrictions on their agency. Third, a myriad of structures across medicine restricts individual agency. In this guide, we present four conceptualizations of agency beyond the individual that medical researchers can incorporate to modernize and broaden their understanding of agency: (a) temporal: how individuals wrestle with their own agency across time; (b) relational: how agency is co-created dialogically with other individuals and structures; (c) cultural: how culture and cultural resources shape possibilities for agency; and (d) structural: how restrictive structures - like racism and ableism that unjustly curtail individual agency - are created, maintained, and resisted. For each dimension, we first describe it by drawing from and summarizing the work of theorists across disciplines. Next, we highlight an article from medical education that makes particularly good use of this dimension, discussing some of its relevant findings. Finally, we offer a set of questions that researchers in medical education can ask to highlight the dimension of agency in their work, and we suggest potential directions for future inquiry. We conclude by offering an example of how a researcher might understand a resident's educational experiences through each of the four proposed dimensions and further explicating the complexity of agency in medical education.</p>","PeriodicalId":18643,"journal":{"name":"Medical Teacher","volume":" ","pages":"1-10"},"PeriodicalIF":3.3,"publicationDate":"2025-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142951363","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Medical Teacher | Pub Date: 2025-01-09 | DOI: 10.1080/0142159X.2024.2445043
Neil Mehta, Craig Nielsen, Amy Zack, Terri Christensen, J H Isaacson
{"title":"Creating custom GPTs for faculty development: An example using the Johari Window and Crucial Conversation frameworks for providing feedback to struggling students.","authors":"Neil Mehta, Craig Nielsen, Amy Zack, Terri Christensen, J H Isaacson","doi":"10.1080/0142159X.2024.2445043","DOIUrl":"https://doi.org/10.1080/0142159X.2024.2445043","url":null,"abstract":"<p><p>Feedback plays a crucial role in the growth and development of trainees, particularly when addressing areas needing improvement. However, faculty members often struggle to deliver constructive feedback, particularly when discussing underperformance. A key obstacle is the lack of comfort many faculty experience in providing feedback that fosters growth. Traditional faculty development programs designed to address these challenges can be expensive and too time-intensive, for busy clinicians.. Generative AI, specifically custom GPT models simulating virtual students and coaches, offers a promising solution for faculty development in feedback training. These AI-driven tools can simulate realistic feedback scenarios using widely accepted educational frameworks and coach faculty members on best practices in delivering constructive feedback. Through interactive, low-cost, and accessible virtual simulations, faculty members can practice in a safe environment and receive immediate, tailored coaching. This approach enhances faculty confidence and competence while reducing the logistical and financial constraints of traditional faculty development programs. By providing scalable, on-demand training, custom GPT-based simulations can be seamlessly integrated into clinical environments, fostering a supportive feedback culture prioritizing trainee development. This paper describes the stepwise process of design and implementation, of a custom GPT-powered feedback training based on an accepted framework. This process can has the potential to transform faculty development in medical education.</p>","PeriodicalId":18643,"journal":{"name":"Medical Teacher","volume":" ","pages":"1-3"},"PeriodicalIF":3.3,"publicationDate":"2025-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142951216","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Medical Teacher | Pub Date: 2025-01-09 | DOI: 10.1080/0142159X.2024.2445037
Ken Masters, Heather MacNeil, Jennifer Benjamin, Tamara Carver, Kataryna Nemethy, Sofia Valanci-Aroesty, David C M Taylor, Brent Thoma, Thomas Thesen
{"title":"Artificial Intelligence in Health Professions Education assessment: AMEE Guide No. 178.","authors":"Ken Masters, Heather MacNeil, Jennifer Benjamin, Tamara Carver, Kataryna Nemethy, Sofia Valanci-Aroesty, David C M Taylor, Brent Thoma, Thomas Thesen","doi":"10.1080/0142159X.2024.2445037","DOIUrl":"https://doi.org/10.1080/0142159X.2024.2445037","url":null,"abstract":"<p><p>Health Professions Education (HPE) assessment is being increasingly impacted by Artificial Intelligence (AI), and institutions, educators, and learners are grappling with AI's ever-evolving complexities, dangers, and potential. This AMEE Guide aims to assist all HPE stakeholders by helping them navigate the assessment uncertainty before them. Although the impetus is AI, the Guide grounds its path in pedagogical theory, considers the range of human responses, and then deals with assessment types, challenges, AI roles as tutor and learner, and required competencies. It then discusses the difficult and ethical issues, before ending with considerations for faculty development and the technicalities of AI acknowledgment in assessment. Through this Guide, we aim to allay fears in the face of change and demonstrate possibilities that will allow educators and learners to harness the full potential of AI in HPE assessment.</p>","PeriodicalId":18643,"journal":{"name":"Medical Teacher","volume":" ","pages":"1-15"},"PeriodicalIF":3.3,"publicationDate":"2025-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142951242","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}