{"title":"Rater Training and Assessment of Student Clinical Judgment: An Experimental Inquiry.","authors":"Deborah A Halliday, Barbara J Patterson","doi":"10.1097/01.NEP.0000000000001399","DOIUrl":null,"url":null,"abstract":"<p><strong>Aim: </strong>This study used the Lasater Clinical Judgment Rubric (LCJR) to determine if rater training influenced interrater reliability.</p><p><strong>Background: </strong>There is a call for nurse educators to provide increased rigor in performance evaluation of students' clinical judgments prior to graduation. However, evaluating student clinical performance is challenging, and there is limited research on best practices for training raters on student performance.</p><p><strong>Method: </strong>An experimental pre- and posttest comparative study was conducted with a convenience sample of 34 nurse educators.</p><p><strong>Results: </strong>Orientation to the evaluation instrument (LCJR) and two training sessions were minimally effective in improving interrater reliability compared to the expert rater benchmark range. Rater decay occurred with both the intervention and control groups.</p><p><strong>Conclusion: </strong>Rater training is crucial to ensure fair and consistent student clinical judgment assessments. The findings suggest that a team of expert raters should define and set rating benchmarks prior to student assessments.</p>","PeriodicalId":47651,"journal":{"name":"Nursing Education Perspectives","volume":" ","pages":""},"PeriodicalIF":0.9000,"publicationDate":"2025-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Nursing Education Perspectives","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1097/01.NEP.0000000000001399","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"EDUCATION & EDUCATIONAL RESEARCH","Score":null,"Total":0}
Citations: 0
Abstract
Aim: This study used the Lasater Clinical Judgment Rubric (LCJR) to determine if rater training influenced interrater reliability.
Background: There is a call for nurse educators to provide increased rigor in performance evaluation of students' clinical judgments prior to graduation. However, evaluating student clinical performance is challenging, and there is limited research on best practices for training raters to evaluate student performance.
Method: An experimental pre- and posttest comparative study was conducted with a convenience sample of 34 nurse educators.
Results: Orientation to the evaluation instrument (LCJR) and two training sessions were minimally effective in improving interrater reliability compared to the expert rater benchmark range. Rater decay occurred in both the intervention and control groups.
Conclusion: Rater training is crucial to ensuring fair and consistent assessment of students' clinical judgment. The findings suggest that a team of expert raters should define and set rating benchmarks prior to student assessments.
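For context on the study's central outcome measure: interrater reliability for ordinal rubric scores such as the LCJR's is commonly summarized with a weighted Cohen's kappa, which gives partial credit when two raters land on adjacent levels rather than agreeing exactly. The sketch below is a minimal illustration of that statistic only; the rating data, the 4-level coding of the LCJR developmental scale, and the choice of kappa itself are assumptions for demonstration, not details reported by the study.

```python
import numpy as np

def weighted_kappa(r1, r2, n_levels, weights="linear"):
    """Weighted Cohen's kappa for two raters' ordinal scores.

    r1, r2: equal-length sequences of ratings coded 0..n_levels-1.
    """
    r1, r2 = np.asarray(r1), np.asarray(r2)
    # Observed joint distribution of the two raters' scores.
    observed = np.zeros((n_levels, n_levels))
    for a, b in zip(r1, r2):
        observed[a, b] += 1
    observed /= observed.sum()
    # Expected distribution under chance (independent marginals).
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0))
    # Disagreement weights: penalty grows with distance between levels.
    idx = np.arange(n_levels)
    w = np.abs(idx[:, None] - idx[None, :]).astype(float)
    if weights == "quadratic":
        w = w ** 2
    # kappa = 1 - weighted observed disagreement / weighted chance disagreement
    return 1.0 - (w * observed).sum() / (w * expected).sum()

# Hypothetical ratings on a 4-level developmental scale like the LCJR's
# (0 = beginning, 1 = developing, 2 = accomplished, 3 = exemplary).
trainee = [2, 1, 3, 0, 2, 2, 1, 3]
expert  = [2, 2, 3, 1, 2, 3, 1, 3]
print(f"weighted kappa = {weighted_kappa(trainee, expert, n_levels=4):.2f}")
```

A kappa near 1 indicates agreement well beyond chance. Computing such a statistic between each trainee rater and an expert's scores is one way to benchmark raters against an expert standard, in the spirit of the expert rater benchmark range described in the results.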
Journal Introduction:
A publication of the National League for Nursing, Nursing Education Perspectives is a peer-reviewed, bimonthly journal that provides evidence for best practices in nursing education. Through the publication of rigorously designed studies, the journal contributes to the advancement of the science of nursing education. It serves as a forum for research and innovation regarding teaching and learning, curricula, technology, and other issues important to nursing education. As nurse educators strive to advance research, break away from established patterns, and chart new pathways in nursing education, Nursing Education Perspectives is a vital resource. The journal is housed in the NLN Chamberlain College of Nursing Center for the Advancement of the Science of Nursing Education.