The McMaster Narrative Comment Rating Tool: Development and Initial Validity Evidence.

IF 2.1 · JCR Q2 (Education, Scientific Disciplines) · CAS Tier 3 (Education)
Teaching and Learning in Medicine · Pub Date: 2025-01-01 · Epub Date: 2023-11-15 · Pages: 86-98 · DOI: 10.1080/10401334.2023.2276799
Natalie McGuire, Anita Acai, Ranil R Sonnadara
{"title":"The McMaster Narrative Comment Rating Tool: Development and Initial Validity Evidence.","authors":"Natalie McGuire, Anita Acai, Ranil R Sonnadara","doi":"10.1080/10401334.2023.2276799","DOIUrl":null,"url":null,"abstract":"<p><strong>Construct: </strong>The McMaster Narrative Comment Rating Tool aims to capture critical features reflecting the quality of written narrative comments provided in the medical education context: valence/tone of language, degree of correction versus reinforcement, specificity, actionability, and overall usefulness.</p><p><strong>Background: </strong>Despite their role in competency-based medical education, not all narrative comments contribute meaningfully to the development of learners' competence. To develop solutions to mitigate this problem, robust measures of narrative comment quality are needed. While some tools exist, most were created in specialty-specific contexts, have focused on one or two features of feedback, or have focused on faculty perceptions of feedback, excluding learners from the validation process. In this study, we aimed to develop a detailed, broadly applicable narrative comment quality assessment tool that drew upon features of high-quality assessment and feedback and could be used by a variety of raters to inform future research, including applications related to automated analysis of narrative comment quality.</p><p><strong>Approach: </strong>In Phase 1, we used the literature to identify five critical features of feedback. We then developed rating scales for each of the features, and collected 670 competency-based assessments completed by first-year surgical residents in the first six-weeks of training. Residents were from nine different programs at a Canadian institution. In Phase 2, we randomly selected 50 assessments with written feedback from the dataset. Two education researchers used the scale to independently score the written comments and refine the rating tool. In Phase 3, 10 raters, including two medical education researchers, two medical students, two residents, two clinical faculty members, and two laypersons from the community, used the tool to independently and blindly rate written comments from another 50 randomly selected assessments from the dataset. We compared scores between and across rater pairs to assess reliability.</p><p><strong>Findings: </strong>Single and average measures intraclass correlation (ICC) scores ranged from moderate to excellent (ICCs = .51-.83 and .91-.98) across all categories and rater pairs. All tool domains were significantly correlated (<i>p</i>'<i>s</i> <.05), apart from valence, which was only significantly correlated with degree of correction versus reinforcement.</p><p><strong>Conclusion: </strong>Our findings suggest that the McMaster Narrative Comment Rating Tool can reliably be used by multiple raters, across a variety of rater types, and in different surgical contexts. 
As such, it has the potential to support faculty development initiatives on assessment and feedback, and may be used as a tool to conduct research on different assessment strategies, including automated analysis of narrative comments.</p>","PeriodicalId":51183,"journal":{"name":"Teaching and Learning in Medicine","volume":" ","pages":"86-98"},"PeriodicalIF":2.1000,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Teaching and Learning in Medicine","FirstCategoryId":"95","ListUrlMain":"https://doi.org/10.1080/10401334.2023.2276799","RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2023/11/15 0:00:00","PubModel":"Epub","JCR":"Q2","JCRName":"EDUCATION, SCIENTIFIC DISCIPLINES","Score":null,"Total":0}
Citations: 0

Abstract

Construct: The McMaster Narrative Comment Rating Tool aims to capture critical features reflecting the quality of written narrative comments provided in the medical education context: valence/tone of language, degree of correction versus reinforcement, specificity, actionability, and overall usefulness.
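Because the tool is also meant to feed into automated analysis of comment quality, it can help to picture the five domains as a single scoring record. The sketch below is hypothetical: the field names and the 1-5 scale bounds are illustrative assumptions, not taken from the published tool.

```python
from dataclasses import dataclass

# Hypothetical representation of one rater's scores for one narrative
# comment; the published tool defines its own anchors and scale ranges.
@dataclass
class NarrativeCommentRating:
    valence: int                      # tone of the language used
    correction_vs_reinforcement: int  # degree of correction vs. reinforcement
    specificity: int
    actionability: int
    overall_usefulness: int

    def __post_init__(self) -> None:
        # Assume a 1-5 scale for every domain (illustrative only).
        for name, value in vars(self).items():
            if not 1 <= value <= 5:
                raise ValueError(f"{name} must be 1-5, got {value}")
```

A record like this makes downstream steps, such as reliability analysis or training an automated scorer, straightforward to express as operations over arrays of per-domain scores.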

Background: Despite their role in competency-based medical education, not all narrative comments contribute meaningfully to the development of learners' competence. To develop solutions to mitigate this problem, robust measures of narrative comment quality are needed. While some tools exist, most were created in specialty-specific contexts, have focused on one or two features of feedback, or have focused on faculty perceptions of feedback, excluding learners from the validation process. In this study, we aimed to develop a detailed, broadly applicable narrative comment quality assessment tool that drew upon features of high-quality assessment and feedback and could be used by a variety of raters to inform future research, including applications related to automated analysis of narrative comment quality.

Approach: In Phase 1, we drew on the literature to identify five critical features of feedback. We then developed rating scales for each feature and collected 670 competency-based assessments completed by first-year surgical residents during their first six weeks of training. Residents came from nine different programs at a Canadian institution. In Phase 2, we randomly selected 50 assessments with written feedback from the dataset. Two education researchers used the scale to independently score the written comments and refine the rating tool. In Phase 3, ten raters, including two medical education researchers, two medical students, two residents, two clinical faculty members, and two laypersons from the community, used the tool to independently and blindly rate written comments from another 50 randomly selected assessments in the dataset. We compared scores within and across rater pairs to assess reliability.

Findings: Single-measures and average-measures intraclass correlation coefficients (ICCs) ranged from moderate to excellent (.51-.83 and .91-.98, respectively) across all categories and rater pairs. All tool domains were significantly correlated with one another (all p < .05), apart from valence, which was significantly correlated only with degree of correction versus reinforcement.
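For readers who want to run this kind of reliability analysis themselves, here is a minimal sketch. It assumes a two-way random-effects, absolute-agreement model (Shrout and Fleiss's ICC(2,1) for single measures and ICC(2,k) for average measures); the abstract does not specify which ICC form was used, and the data below are simulated for illustration only.

```python
import numpy as np

def icc_two_way_random(x: np.ndarray) -> tuple[float, float]:
    """Return (ICC(2,1) single measures, ICC(2,k) average measures) for an
    n_targets x k_raters score matrix with no missing values."""
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)  # mean score per rated comment
    col_means = x.mean(axis=0)  # mean score per rater

    # Two-way ANOVA mean squares (Shrout & Fleiss, 1979).
    ms_rows = k * np.sum((row_means - grand) ** 2) / (n - 1)
    ms_cols = n * np.sum((col_means - grand) ** 2) / (k - 1)
    resid = x - row_means[:, None] - col_means[None, :] + grand
    ms_err = np.sum(resid ** 2) / ((n - 1) * (k - 1))

    # ICC(2,1): reliability of a single rater's score.
    icc_single = (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )
    # ICC(2,k): reliability of the k raters' mean score.
    icc_average = (ms_rows - ms_err) / (ms_rows + (ms_cols - ms_err) / n)
    return icc_single, icc_average

# Simulated example: one rater pair scoring the "actionability" domain of
# 50 written comments on a 5-point scale (data are made up).
rng = np.random.default_rng(0)
true_quality = rng.integers(1, 6, size=50).astype(float)
noise = rng.integers(-1, 2, size=(50, 2))
ratings = np.clip(true_quality[:, None] + noise, 1, 5)
single, average = icc_two_way_random(ratings)
print(f"ICC(2,1) = {single:.2f}, ICC(2,2) = {average:.2f}")
```

The single-measures ICC describes the reliability of one rater working alone, while the average-measures ICC describes the reliability of the pair's mean score, which is why the reported average-measures values (.91-.98) sit higher than the single-measures values (.51-.83).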

Conclusion: Our findings suggest that the McMaster Narrative Comment Rating Tool can reliably be used by multiple raters, across a variety of rater types, and in different surgical contexts. As such, it has the potential to support faculty development initiatives on assessment and feedback, and may be used as a tool to conduct research on different assessment strategies, including automated analysis of narrative comments.

Source journal

Teaching and Learning in Medicine (Medicine - Health Care)
CiteScore: 5.20
Self-citation rate: 12.00%
Articles per year: 64
Review time: 6-12 weeks
Journal description

Teaching and Learning in Medicine (TLM) is an international forum for scholarship on teaching and learning in the health professions. Its international scope reflects the common challenge faced by all medical educators: fostering the development of capable, well-rounded, and continuous learners prepared to practice in a complex, high-stakes, and ever-changing clinical environment. TLM's contributors and readership comprise behavioral scientists and health care practitioners, signaling the value of integrating diverse perspectives into a comprehensive understanding of learning and performance. The journal seeks to provide the theoretical foundations and practical analysis needed for effective educational decision making in such areas as admissions, instructional design and delivery, performance assessment, remediation, technology-assisted instruction, diversity management, and faculty development, among others. TLM's scope includes all levels of medical education, from premedical to postgraduate and continuing medical education, with articles published in the following categories: