Title: Excellence bias related to rating scales with summative jury assessment
Author: David Corradi
Journal: Assessment & Evaluation in Higher Education (Q1, Education & Educational Research)
DOI: 10.1080/02602938.2022.2112653 (https://doi.org/10.1080/02602938.2022.2112653)
Published: 2022-08-18 (Journal Article)
Citations: 0
Abstract
Juries are a high-stakes practice in higher education for assessing complex competencies. However common they are, research lags behind in detailing the psychometric qualities of juries, especially when rubrics or rating scales are used as the assessment tool. In this study, I analyze a case of jury assessment (N = 191) of product development in which both internal teaching staff and external judges assess the work and fill in an analytic rating scale. Using a polytomous item response theory (IRT) analysis developed for heterogeneous juries (i.e. jury response theory, or JRT), this study attempts to provide insight into the validity and reliability of the assessment tool used. The results indicate that JRT helps detect unreliable response patterns that point to an excellence bias, i.e. a tendency not to score in the highest response category. The article concludes with a discussion of how to counter such bias when using rating scales or rubrics for summative assessment.
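The abstract does not spell out the JRT model itself, but the kind of polytomous IRT it builds on can be illustrated with a standard graded response model (Samejima). The sketch below is a generic illustration, not the paper's actual model: the function names, parameter values, and the four-threshold scale are all hypothetical. It shows how an "excellence bias" can surface as an extreme top-category threshold, so that even a very strong ratee is rarely awarded the highest score.

```python
import math

def grm_category_probs(theta, discrimination, thresholds):
    """Graded response model: probabilities of K+1 ordered rating
    categories for a ratee with latent quality `theta`.

    `thresholds` is an increasing list b_1 < ... < b_K; the cumulative
    probability of reaching at least category k follows a 2PL logistic
    curve a * (theta - b_k). (Illustrative sketch, not the JRT model.)
    """
    def p_at_least(b):
        # P(score >= k): logistic in the distance between quality and threshold
        return 1.0 / (1.0 + math.exp(-discrimination * (theta - b)))

    # Cumulative curves: everyone reaches category 0; no one exceeds the top.
    cum = [1.0] + [p_at_least(b) for b in thresholds] + [0.0]
    # Category probability = difference of adjacent cumulative curves.
    return [cum[k] - cum[k + 1] for k in range(len(thresholds) + 1)]

# Hypothetical judges rating the same strong ratee (theta = 2.0) on a
# 5-category scale. The "biased" judge's top threshold (4.0) sits far
# above the ratee, so the highest category is almost never used.
biased = grm_category_probs(theta=2.0, discrimination=1.5,
                            thresholds=[-1.0, 0.0, 1.0, 4.0])
neutral = grm_category_probs(theta=2.0, discrimination=1.5,
                             thresholds=[-1.0, 0.0, 1.0, 2.0])
```

Under these made-up parameters, the biased judge's probability for the top category is under 5%, versus 50% for the neutral judge; in an IRT calibration, such a threshold estimate drifting far above the others is one signal of the excellence bias the abstract describes.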