Evaluating Student Clerkship Performance Using Multiple Assessment Components

PRiMER · Pub Date: 2024-04-24 · DOI: 10.22454/primer.2024.160111
Oladimeji Oki, Zoon Naqvi, William Jordan, Conair E Guilliames, Heather Archer-Dyer, Maria Teresa Santos
{"title":"Evaluating Student Clerkship Performance Using Multiple Assessment Components","authors":"Oladimeji Oki, Zoon Naqvi, William Jordan, Conair E Guilliames, Heather Archer-Dyer, Maria Teresa Santos","doi":"10.22454/primer.2024.160111","DOIUrl":null,"url":null,"abstract":"Introduction: Family medicine clerkships utilize a broad set of objectives. The scope of these objectives cannot be measured by one assessment alone. Using multiple assessments aimed at measuring different objectives may provide more holistic evaluation of students. A further concern is to ensure longitudinal accuracy of assessments. In this study, we sought to better understand the relevance and validity of different assessment tools used in family medicine clerkships.\nMethods: We retrospectively correlated family medicine clerkship students’ scores across different assessments to evaluate the strengths of the correlations, between the different assessment tools. We defined ρ<0.3 as weak, ρ>0.3 to ρ<0.5 as moderate, and ρ>0.5 as high correlation.\nResults: We compared individual assessment scores for 267 students for analysis. The correlation of the clinical evaluation was 0.165 (P<.01); with case-based short-answer questions it was 0.153 (P<.01); and with objective structured clinical examinations it was -0.246 (P<0.01).\nConclusion: Overall low levels of correlations between our assessments are expected, as they are each designed to measure different objectives. The relatively higher correlation between component scores supports convergent validity while correlations closer to zero suggest discriminant validity. Unexpectedly, comparing the multiple-choice questions and objective, structured clinical encounter (OSCE) assessments, we found higher correlation, although we believe these should measure disparate objectives. We replaced our in-house multiple-choice questions with a nationally-standardized exam and preliminary analysis shows the expected weaker correlation with the OSCE assessment, suggesting periodic correlations between assessments may be useful.","PeriodicalId":507541,"journal":{"name":"PRiMER","volume":"99 2","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"PRiMER","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.22454/primer.2024.160111","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Introduction: Family medicine clerkships utilize a broad set of objectives whose scope cannot be measured by any single assessment. Using multiple assessments aimed at measuring different objectives may provide a more holistic evaluation of students. A further concern is ensuring the longitudinal accuracy of assessments. In this study, we sought to better understand the relevance and validity of the different assessment tools used in family medicine clerkships.

Methods: We retrospectively correlated family medicine clerkship students' scores across different assessments to evaluate the strength of the correlations between the different assessment tools. We defined ρ<0.3 as weak, 0.3≤ρ<0.5 as moderate, and ρ≥0.5 as high correlation.

Results: We compared individual assessment scores for 267 students. The correlation with the clinical evaluation was 0.165 (P<.01); with case-based short-answer questions it was 0.153 (P<.01); and with objective structured clinical examinations it was -0.246 (P<.01).

Conclusion: Overall, low levels of correlation between our assessments are expected, as each is designed to measure different objectives. Relatively higher correlations between component scores support convergent validity, while correlations closer to zero suggest discriminant validity. Unexpectedly, we found a higher correlation between the multiple-choice question and objective structured clinical examination (OSCE) assessments, although we believe these should measure disparate objectives. We replaced our in-house multiple-choice questions with a nationally standardized exam, and preliminary analysis shows the expected weaker correlation with the OSCE assessment, suggesting that periodic correlation of assessments may be useful.
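The Methods describe pairwise correlation of per-student scores across assessment components, with fixed thresholds for labeling correlation strength. Below is a minimal sketch of that kind of analysis, not the authors' actual code: it assumes the scores sit in a pandas DataFrame, and the column names (clinical_eval, mcq, osce, short_answer) and file name are hypothetical placeholders.

```python
# Sketch of a pairwise Spearman correlation analysis between assessment
# components, using the strength thresholds defined in the study.
from itertools import combinations

import pandas as pd
from scipy.stats import spearmanr


def correlation_strength(rho: float) -> str:
    """Classify |rho| with the study's cutoffs: <0.3 weak, 0.3-0.5 moderate, >=0.5 high."""
    rho = abs(rho)
    if rho < 0.3:
        return "weak"
    if rho < 0.5:
        return "moderate"
    return "high"


def pairwise_correlations(scores: pd.DataFrame) -> pd.DataFrame:
    """Compute Spearman rho and P value for every pair of assessment columns."""
    rows = []
    for a, b in combinations(scores.columns, 2):
        rho, p = spearmanr(scores[a], scores[b], nan_policy="omit")
        rows.append({
            "pair": f"{a} vs {b}",
            "rho": round(rho, 3),
            "p": p,
            "strength": correlation_strength(rho),
        })
    return pd.DataFrame(rows)


# Hypothetical usage (the study analyzed 267 students):
# scores = pd.read_csv("clerkship_scores.csv",
#                      usecols=["clinical_eval", "mcq", "osce", "short_answer"])
# print(pairwise_correlations(scores))
```

Spearman's rank correlation is used here as a reasonable assumption for ordinal clerkship scores; the abstract reports ρ values but does not specify the exact coefficient or software used.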