Latest Articles in Medical Teacher

Optimizing interprofessional collaboration: Enhancing nurse-physician trainee dynamics.
IF 3.3 | Zone 2 | Education
Medical Teacher Pub Date: 2025-10-01 Epub Date: 2025-02-28 DOI: 10.1080/0142159X.2025.2472798
Haiying Zhu, Zhenliang Sun
{"title":"Optimizing interprofessional collaboration: Enhancing nurse -physician trainee dynamics.","authors":"Haiying Zhu, Zhenliang Sun","doi":"10.1080/0142159X.2025.2472798","DOIUrl":"10.1080/0142159X.2025.2472798","url":null,"abstract":"","PeriodicalId":18643,"journal":{"name":"Medical Teacher","volume":" ","pages":"1694"},"PeriodicalIF":3.3,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143522825","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Introducing AI as members of script concordance test expert reference panel: A comparative analysis.
IF 3.3 | Zone 2 | Education
Medical Teacher Pub Date: 2025-10-01 Epub Date: 2025-03-08 DOI: 10.1080/0142159X.2025.2473620
Moataz A Sallam, Enjy Abouzeid
{"title":"Introducing AI as members of script concordance test expert reference panel: A comparative analysis.","authors":"Moataz A Sallam, Enjy Abouzeid","doi":"10.1080/0142159X.2025.2473620","DOIUrl":"10.1080/0142159X.2025.2473620","url":null,"abstract":"<p><strong>Background: </strong>The Script Concordance Test (SCT) is increasingly used in professional development to assess clinical reasoning, with linear progression in SCT performance observed as clinical experience increases. One challenge in implementing SCT is the potential burnout of expert reference panel (ERP) members. To address this, we introduced ChatGPT as panel members. The aim was to enhance the efficiency of SCT creation while maintaining educational content quality and to explore the effectiveness of different models as reference panels.</p><p><strong>Methodology: </strong>A quasi-experimental comparative design was employed, involving all undergraduate medical students and faculty members enrolled in the Ophthalmology clerkship. Two groups involved Traditional ERP which consisted of 15 experts, diversified in clinical experience: 5 senior residents, 5 lecturers, and 5 professors and AI-Generated ERP which is a panel generated using ChatGPT and o1 preview, designed to mirror diverse clinical opinions based on varying experience levels.</p><p><strong>Results: </strong>Experts consistently achieved the highest mean scores across most vignettes, with ChatGPT-4 and o1 scores generally slightly lower. Notably, the o1 mean scores were closer to those of experts compared to ChatGPT-4. Significant differences were observed between ChatGPT-4 and o1 scores in certain vignettes. These values indicate a strong level of consistency, suggesting that both experts and AI models provided highly reliable ratings.</p><p><strong>Conclusion: </strong>These findings suggest that while AI models cannot replace human experts, they can be effectively used to train students, enhance reasoning skills, and help narrow the gap between student and expert performance.</p>","PeriodicalId":18643,"journal":{"name":"Medical Teacher","volume":" ","pages":"1637-1644"},"PeriodicalIF":3.3,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143586315","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
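The abstract does not spell out how SCT items are scored, but the widely used aggregate-scoring method gives partial credit in proportion to panel agreement: an examinee earns (votes for their chosen option) divided by (votes for the panel's modal option). A minimal Python sketch under that assumption; the panel data and function name are illustrative, not from the study:

```python
from collections import Counter

def sct_item_scores(panel_answers):
    """Map each Likert option to partial credit using the standard
    SCT aggregate method: credit = votes for that option divided
    by votes for the modal (most popular) option."""
    counts = Counter(panel_answers)
    modal = max(counts.values())
    return {option: n / modal for option, n in counts.items()}

# Hypothetical 15-member panel rating one vignette on a -2..+2 scale
panel = [1, 1, 2, 1, 0, 1, 2, 1, 1, 0, 1, 2, 1, 1, 0]
scores = sct_item_scores(panel)
# Answering 1 (the modal choice) earns full credit (1.0); answering 2
# earns 3/9; options no panelist chose earn 0.
print(scores.get(1), scores.get(2), scores.get(-2, 0.0))
```

Replacing some or all of the 15 human panelists with model-generated responses, as the study does, changes only where `panel_answers` comes from; the scoring arithmetic is unchanged.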
Assessing medical knowledge: A 3-year comparative study of very short answer vs. multiple choice questions.
IF 3.3 | Zone 2 | Education
Medical Teacher Pub Date: 2025-10-01 Epub Date: 2025-04-28 DOI: 10.1080/0142159X.2025.2496382
Harry G Potter, John C McLachlan
{"title":"Assessing medical knowledge: A 3-year comparative study of very short answer vs. multiple choice questions.","authors":"Harry G Potter, John C McLachlan","doi":"10.1080/0142159X.2025.2496382","DOIUrl":"10.1080/0142159X.2025.2496382","url":null,"abstract":"<p><strong>Purpose: </strong>Assessment design significantly influences evaluation of student learning. Multiple choice questions (MCQ) and very short answer questions (VSAQ) are commonly used assessment formats, especially in high-stakes settings like medical education. MCQs are favoured for efficiency, coverage, and reliability but may lack depth in assessing critical thinking. VSAQs require students to generate responses, potentially enhancing depth, but posing challenges in consistency and subjective interpretation.</p><p><strong>Methods: </strong>Data from parallel MCQ/VSAQ exams over three years was collected. Summary statistics for each exam (marks, time, and discrimination index; DI) and the effect of year and question characteristics were analysed.</p><p><strong>Results: </strong>VSAQs were associated with lower marks (<i>p</i> < 0.001), longer time (<i>p</i> < 0.001), and higher DI (<i>p</i> < 0.001). Question characteristics (e.g. basic science or clinical stems) significantly affected the mark, time, and DI, changing across years, but not interacting with question format.</p><p><strong>Conclusion: </strong>While MCQs resulted in higher marks, VSAQs provided higher discrimination of student performance. Response options in MCQs likely enhance recall, however real-world settings also offer contextual cues. Question characteristics affect student performance independently of format, likely due to differences in cohort career progression. Future research should investigate predictive validity and standard setting of VSAQs in a basic science context.</p>","PeriodicalId":18643,"journal":{"name":"Medical Teacher","volume":" ","pages":"1669-1677"},"PeriodicalIF":3.3,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143990202","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
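The study compares formats on the discrimination index without defining it here. A common definition is the upper-lower index: rank candidates by total exam score, take the top and bottom fractions (27% is the usual convention), and subtract the groups' mean item marks. A minimal sketch under that assumption; the cohort data are hypothetical:

```python
import math

def discrimination_index(item_marks, totals, frac=0.27):
    """Upper-lower discrimination index: difference in mean item mark
    between the top and bottom `frac` of candidates ranked by total
    exam score. item_marks are scaled 0..1; totals share the order."""
    order = sorted(range(len(totals)), key=lambda i: totals[i])
    n = max(1, math.floor(len(totals) * frac))
    lower = sum(item_marks[i] for i in order[:n]) / n
    upper = sum(item_marks[i] for i in order[-n:]) / n
    return upper - lower  # ranges -1..1; higher = better discrimination

# Hypothetical cohort of 10 candidates on one item
marks  = [0, 0, 1, 0, 1, 1, 0, 1, 1, 1]    # item correct/incorrect
totals = [42, 48, 55, 51, 60, 72, 45, 68, 75, 80]
print(round(discrimination_index(marks, totals), 2))
```

The same function applies to both formats: an MCQ item contributes 0/1 marks, while a VSAQ item can contribute fractional marks if partial credit is awarded.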
Click, create, critique: Futureproofing critical thinking in the age of generative AI.
IF 3.3 | Zone 2 | Education
Medical Teacher Pub Date: 2025-10-01 DOI: 10.1080/0142159X.2025.2566267
Jocelyne Velupillai, Stephen Waite
{"title":"Click, create, critique: Futureproofing critical thinking in the age of generative AI.","authors":"Jocelyne Velupillai, Stephen Waite","doi":"10.1080/0142159X.2025.2566267","DOIUrl":"https://doi.org/10.1080/0142159X.2025.2566267","url":null,"abstract":"","PeriodicalId":18643,"journal":{"name":"Medical Teacher","volume":" ","pages":"1"},"PeriodicalIF":3.3,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145200197","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Going beyond hawks and doves - Measuring degrees of examiner misalignment in OSCEs.
IF 3.3 | Zone 2 | Education
Medical Teacher Pub Date: 2025-10-01 Epub Date: 2025-02-05 DOI: 10.1080/0142159X.2025.2461561
Matt Homer
{"title":"Going beyond hawks and doves - Measuring degrees of examiner misalignment in OSCEs.","authors":"Matt Homer","doi":"10.1080/0142159X.2025.2461561","DOIUrl":"10.1080/0142159X.2025.2461561","url":null,"abstract":"<p><strong>Background: </strong>Minimising examiner differences in scoring in OSCEs is key in supporting the validity of the assessment outcomes. This is particularly true for common OSCE designs where the same station is administered across parallel circuits, with examiners nested within these. However, the common classification of extreme examiners as 'hawks' or 'doves' can be overly simplistic. Rather, it is the difference in patterns of scoring across circuits that better indicates poor levels of agreement between examiners that can unfairly advantage particular groups of candidates in comparison with others in different circuits.</p><p><strong>Methods and materials: </strong>In this paper, a new measure of differences in examiner scoring is presented that quantifies the different combined patterns of scoring in global grades and checklist/domain scores for pairs of examiners assessing in the same station but in different circuits. Based on calculating the area between separate examiners' individual borderline regression lines, this measure can be used as a <i>post hoc</i> metric to provide a broad range of validity evidence for the assessment and its outcomes.</p><p><strong>Results and conclusions: </strong>In challenging the 'hawks'/'doves' paradigm, this work presents a detailed empirical analysis of a new misalignment metric in a particular high-stakes context and gives a range of evidence of its contribution to overall OSCE quality control processes and of improved fairness to candidates over time. The paper concludes with comments on developing the metric to contexts where there are multiple parallel circuits which will allows its practical application to a broader set of OSCE contexts.</p>","PeriodicalId":18643,"journal":{"name":"Medical Teacher","volume":" ","pages":"1630-1636"},"PeriodicalIF":3.3,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143190004","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
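The metric is described as the area between two examiners' borderline regression lines (checklist score regressed on global grade). A minimal numerical sketch of that computation, assuming a 1-5 global grade scale and hypothetical station data; the fitting and integration details are illustrative, not the paper's exact implementation:

```python
import numpy as np

def brm_line(grades, scores):
    """Fit a borderline-regression line: checklist score regressed
    on global grade; returns (slope, intercept)."""
    slope, intercept = np.polyfit(grades, scores, 1)
    return slope, intercept

def misalignment_area(line_a, line_b, grade_range=(1, 5), n=501):
    """Area between two examiners' regression lines across the grade
    scale (trapezoidal rule); 0 means the lines coincide exactly."""
    x = np.linspace(grade_range[0], grade_range[1], n)
    gap = np.abs((line_a[0] * x + line_a[1]) - (line_b[0] * x + line_b[1]))
    dx = x[1] - x[0]
    return float(np.sum((gap[:-1] + gap[1:]) / 2) * dx)

# Hypothetical station data: global grades (1-5) and checklist scores (%)
g_a, s_a = [1, 2, 3, 4, 5], [35, 48, 60, 71, 84]  # examiner, circuit A
g_b, s_b = [1, 2, 3, 4, 5], [42, 50, 57, 66, 72]  # examiner, circuit B
area = misalignment_area(brm_line(g_a, s_a), brm_line(g_b, s_b))
print(round(area, 1))  # larger area = greater examiner misalignment
```

Unlike a single hawk/dove offset, this area stays large when two examiners' lines cross, capturing cases where one examiner is harsher on weak candidates but more lenient on strong ones.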
The role of internationalization of medical education via international exchange programs in professional identity formation.
IF 3.3 | Zone 2 | Education
Medical Teacher Pub Date: 2025-10-01 Epub Date: 2025-03-18 DOI: 10.1080/0142159X.2025.2478873
Jason Luong, Jessica Hui, Geoffroy Noel, Anette Wu
{"title":"The role of internationalization of medical education via international exchange programs in professional identity formation.","authors":"Jason Luong, Jessica Hui, Geoffroy Noel, Anette Wu","doi":"10.1080/0142159X.2025.2478873","DOIUrl":"10.1080/0142159X.2025.2478873","url":null,"abstract":"","PeriodicalId":18643,"journal":{"name":"Medical Teacher","volume":" ","pages":"1694-1695"},"PeriodicalIF":3.3,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143657844","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Evaluating the value of AI-generated questions for USMLE step 1 preparation: A study using ChatGPT-3.5.
IF 3.3 | Zone 2 | Education
Medical Teacher Pub Date: 2025-10-01 Epub Date: 2025-03-27 DOI: 10.1080/0142159X.2025.2478872
Alan Balu, Stefan T Prvulovic, Claudia Fernandez Perez, Alexander Kim, Daniel A Donoho, Gregory Keating
{"title":"Evaluating the value of AI-generated questions for USMLE step 1 preparation: A study using ChatGPT-3.5.","authors":"Alan Balu, Stefan T Prvulovic, Claudia Fernandez Perez, Alexander Kim, Daniel A Donoho, Gregory Keating","doi":"10.1080/0142159X.2025.2478872","DOIUrl":"10.1080/0142159X.2025.2478872","url":null,"abstract":"<p><strong>Purpose: </strong>Students are increasingly relying on artificial intelligence (AI) for medical education and exam preparation. However, the factual accuracy and content distribution of AI-generated exam questions for self-assessment have not been systematically investigated.</p><p><strong>Methods: </strong>Curated prompts were created to generate multiple-choice questions matching the USMLE Step 1 examination style. We utilized ChatGPT-3.5 to generate 50 questions and answers based upon each prompt style. We manually examined output for factual accuracy, Bloom's Taxonomy, and category within the USMLE Step 1 content outline.</p><p><strong>Results: </strong>ChatGPT-3.5 generated 150 multiple-choice case-style questions and selected an answer. Overall, 83% of generated multiple questions had no factual inaccuracies and 15% contained one to two factual inaccuracies. With simple prompting, common themes included deep venous thrombosis, myocardial infarction, and thyroid disease. Topic diversity improved by separating content topic generation from question generation, and specificity to Step 1 increased by indicating that \"treatment\" questions were not desired.</p><p><strong>Conclusion: </strong>We demonstrate that ChatGPT-3.5 can successfully generate Step 1 style questions with reasonable factual accuracy, and this method may be used by medical students preparing for USMLE examinations. While AI-generated questions demonstrated adequate factual accuracy, targeted prompting techniques should be used to overcome ChatGPT's bias towards particular medical conditions.</p>","PeriodicalId":18643,"journal":{"name":"Medical Teacher","volume":" ","pages":"1645-1653"},"PeriodicalIF":3.3,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143730609","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
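The paper's exact prompts are not reproduced here, but its two refinements (generating topics separately from questions, and explicitly excluding 'treatment' questions) can be sketched with the openai Python client. The model name and prompt wording below are illustrative assumptions, not the study's own materials:

```python
from openai import OpenAI  # pip install openai; requires OPENAI_API_KEY

client = OpenAI()

def ask(prompt: str) -> str:
    """Send one user prompt and return the model's text reply."""
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",  # stand-in for the ChatGPT-3.5 used in the study
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# Step 1: generate diverse topics first; the authors found separating
# this step reduces the model's drift toward a few favourite conditions.
topics = ask(
    "List 5 distinct USMLE Step 1 content-outline topics, one per line, "
    "spanning different organ systems and basic-science disciplines."
)

# Step 2: one question per topic, excluding treatment/management items
# to keep questions Step 1-style rather than Step 2 CK-style.
for topic in topics.splitlines():
    if topic.strip():
        print(ask(
            f"Write one USMLE Step 1-style clinical vignette MCQ on: {topic}. "
            "Give five answer options (A-E), indicate the correct answer, "
            "and do NOT ask about treatment or management."
        ))
```

Per the paper's conclusion, output generated this way still needs manual review for factual accuracy before it is used for self-assessment.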
Let's consider what writing is good for before we hand it over to AI.
IF 3.3 | Zone 2 | Education
Medical Teacher Pub Date: 2025-10-01 Epub Date: 2025-07-22 DOI: 10.1080/0142159X.2025.2523464
Lorelei Lingard
{"title":"Let's consider what writing is good for before we hand it over to AI.","authors":"Lorelei Lingard","doi":"10.1080/0142159X.2025.2523464","DOIUrl":"10.1080/0142159X.2025.2523464","url":null,"abstract":"","PeriodicalId":18643,"journal":{"name":"Medical Teacher","volume":" ","pages":"1571-1572"},"PeriodicalIF":3.3,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144690948","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
What do we become? Artificial intelligence and academic identity.
IF 3.3 | Zone 2 | Education
Medical Teacher Pub Date: 2025-10-01 Epub Date: 2025-07-22 DOI: 10.1080/0142159X.2025.2523468
Rachel H Ellaway
{"title":"What do we become? Artificial intelligence and academic identity.","authors":"Rachel H Ellaway","doi":"10.1080/0142159X.2025.2523468","DOIUrl":"10.1080/0142159X.2025.2523468","url":null,"abstract":"","PeriodicalId":18643,"journal":{"name":"Medical Teacher","volume":" ","pages":"1569-1570"},"PeriodicalIF":3.3,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144690952","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Introducing a new occasional series for Medical Teacher: Colloquy.
IF 3.3 | Zone 2 | Education
Medical Teacher Pub Date: 2025-10-01 Epub Date: 2025-07-22 DOI: 10.1080/0142159X.2025.2523687
Jennifer Cleland
{"title":"Introducing a new occasional series for Medical Teacher: Colloquy.","authors":"Jennifer Cleland","doi":"10.1080/0142159X.2025.2523687","DOIUrl":"10.1080/0142159X.2025.2523687","url":null,"abstract":"","PeriodicalId":18643,"journal":{"name":"Medical Teacher","volume":" ","pages":"1561-1562"},"PeriodicalIF":3.3,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144690947","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0