{"title":"Effect of moderation on rubric criteria for inter-rater reliability in an objective structured clinical examination with real patients.","authors":"Tetsuro Watari, Soichiro Koyama, Yusaku Kato, Yonho Paku, Yoshikiyo Kanada, Hiroaki Sakurai","doi":"10.20407/fmj.2021-010","DOIUrl":null,"url":null,"abstract":"<p><p>Objectives: Objective structured clinical examinations (OSCEs) are used to assess clinical competence in medical education. Evaluations using video-recorded OSCEs are effective in reducing costs in terms of time and human resources. To improve inter-rater reliability, these evaluations undergo moderation in the form of a discussion between the raters to obtain consistency in grading according to the rubric criteria. We examined the effect of moderation related to the rubric criteria on the inter-rater reliability of a video-recorded OSCE with real patients.</p><p><strong>Methods: </strong>Forty OSCE videos in which students performed range-of-motion tests at shoulder abduction on real patients were assessed by two raters. The two raters scored videos 1 to 10 without moderation and videos 11 to 40 with moderation each time. The inter-rater reliability of the OSCE was calculated using the weighted kappa coefficient.</p><p><strong>Results: </strong>The mean scores of the weighted kappa coefficients were 0.49 for videos 1 to 10, 0.57 for videos 11 to 20, 0.66 for videos 21 to 30, and 0.82 for videos 31 to 40.</p><p><strong>Conclusions: </strong>An assessment of video-recorded OSCEs was conducted with real patients in a real clinical setting. Repeated moderation improved the inter-rater reliability. This study suggests the effectiveness of moderation in OSCEs with real patients.</p>","PeriodicalId":33657,"journal":{"name":"Fujita Medical Journal","volume":"8 3","pages":"83-87"},"PeriodicalIF":0.0000,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9358671/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Fujita Medical Journal","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.20407/fmj.2021-010","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2021/11/25 0:00:00","PubModel":"Epub","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Objectives: Objective structured clinical examinations (OSCEs) are used to assess clinical competence in medical education. Evaluations using video-recorded OSCEs can reduce the time and human resources required. To improve inter-rater reliability, these evaluations undergo moderation, a discussion between the raters aimed at achieving consistent grading according to the rubric criteria. We examined the effect of moderation related to the rubric criteria on the inter-rater reliability of a video-recorded OSCE with real patients.
Methods: Forty OSCE videos in which students performed range-of-motion tests of shoulder abduction on real patients were assessed by two raters. The two raters scored videos 1 to 10 without moderation and videos 11 to 40 with moderation each time. The inter-rater reliability of the OSCE was calculated using the weighted kappa coefficient.
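A minimal sketch of the weighted kappa calculation described above, using scikit-learn. The rubric scale (scores 0 to 3), the example ratings, and the quadratic weighting scheme are assumptions for illustration; the abstract does not state which weighting was used.

```python
# Sketch: weighted kappa between two raters' rubric scores for one
# block of ten videos. Scores and the "quadratic" weighting are
# hypothetical, not taken from the paper.
from sklearn.metrics import cohen_kappa_score

# Hypothetical ordinal rubric scores (0-3) from each rater.
rater_a = [2, 3, 1, 2, 3, 0, 2, 1, 3, 2]
rater_b = [2, 3, 2, 2, 3, 1, 2, 1, 3, 3]

# Weighted kappa penalizes larger disagreements more heavily,
# which suits ordinal rubric criteria.
kappa = cohen_kappa_score(rater_a, rater_b, weights="quadratic")
print(f"Weighted kappa: {kappa:.2f}")
```

With linear weights, disagreement penalties grow proportionally to the score gap; quadratic weights penalize large gaps more sharply, so the choice affects the coefficient's magnitude.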
Results: The mean weighted kappa coefficients were 0.49 for videos 1 to 10, 0.57 for videos 11 to 20, 0.66 for videos 21 to 30, and 0.82 for videos 31 to 40.
Conclusions: An assessment of video-recorded OSCEs was conducted with real patients in a real clinical setting. Repeated moderation improved inter-rater reliability. These findings suggest that moderation is effective in OSCEs with real patients.