{"title":"An Application of the Partial Credit IRT Model in Identifying Benchmarks for Polytomous Rating Scale Instruments.","authors":"Enis Dogan","doi":"10.7275/1cf3-aq56","DOIUrl":null,"url":null,"abstract":"Several large scale assessments include student, teacher, and school background questionnaires. Results from such questionnaires can be reported for each item separately, or as indices based on aggregation of multiple items into a scale. Interpreting scale scores is not always an easy task though. In disseminating results of achievement tests, one solution to this conundrum is to identify cut scores on the reporting scale in order to divide it into achievement levels that correspond to distinct knowledge and skill profiles. This allows for the reporting of the percentage of students at each achievement level in addition to average scale scores. Dividing a scale into meaningful segments can, and perhaps should, be done to enrich interpretability of scales based on questionnaire items as well. This article illustrates an approach based on an application of Item Response Theory (IRT) to accomplish this. The application is demonstrated with a polytomous rating scale instrument designed to measure students’ sense of school belonging.","PeriodicalId":20361,"journal":{"name":"Practical Assessment, Research and Evaluation","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2018-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Practical Assessment, Research and Evaluation","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.7275/1cf3-aq56","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"Social Sciences","Score":null,"Total":0}
Abstract
Several large-scale assessments include student, teacher, and school background questionnaires. Results from such questionnaires can be reported for each item separately, or as indices based on aggregation of multiple items into a scale. Interpreting scale scores, however, is not always straightforward. In disseminating results of achievement tests, one solution to this conundrum is to identify cut scores on the reporting scale in order to divide it into achievement levels that correspond to distinct knowledge and skill profiles. This allows the percentage of students at each achievement level to be reported in addition to average scale scores. Dividing a scale into meaningful segments can, and perhaps should, also be done to enrich the interpretability of scales based on questionnaire items. This article illustrates an approach based on an application of Item Response Theory (IRT) to accomplish this. The application is demonstrated with a polytomous rating scale instrument designed to measure students' sense of school belonging.
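
To give a concrete sense of the IRT machinery the abstract refers to, the Python sketch below computes category probabilities under the Partial Credit Model and locates points on the latent scale where the expected scale score reaches chosen benchmarks. The step parameters, item count, and benchmark values are hypothetical placeholders, and the cut-point rule shown here is only a simplified stand-in for the benchmark-setting procedure described in the article.

```python
import numpy as np

# Hypothetical step (threshold) parameters for a 5-item, 4-category
# sense-of-belonging scale; in practice these would be estimated from
# response data with a Partial Credit Model via an IRT package.
step_params = [
    np.array([-1.2, 0.1, 1.4]),
    np.array([-0.8, 0.3, 1.1]),
    np.array([-1.5, -0.2, 0.9]),
    np.array([-0.6, 0.5, 1.6]),
    np.array([-1.0, 0.0, 1.3]),
]

def pcm_category_probs(theta, deltas):
    """Category probabilities for one item under the Partial Credit Model.

    theta  : latent trait value
    deltas : step parameters delta_1..delta_m for the item
    Returns P(X = 0), ..., P(X = m) at the given theta.
    """
    # Numerators are exp of cumulative sums of (theta - delta_k);
    # category 0 has numerator exp(0) = 1.
    numerators = np.exp(np.concatenate(([0.0], np.cumsum(theta - deltas))))
    return numerators / numerators.sum()

def expected_scale_score(theta, all_deltas):
    """Expected raw scale score (summed over items) at a given theta."""
    return sum(
        np.dot(np.arange(len(d) + 1), pcm_category_probs(theta, d))
        for d in all_deltas
    )

# Locate theta values at which the expected scale score crosses
# illustrative raw-score benchmarks (e.g., "agree on average" responses).
grid = np.linspace(-4, 4, 801)
expected = np.array([expected_scale_score(t, step_params) for t in grid])

for benchmark in (5, 10):  # illustrative benchmarks only
    cut = grid[np.argmin(np.abs(expected - benchmark))]
    print(f"Expected scale score {benchmark} is reached near theta = {cut:.2f}")
```

A mapping of this kind, from substantively meaningful response patterns to locations on the latent scale, is what allows a questionnaire scale to be segmented into interpretable levels and reported as percentages of students per level, in the spirit of the approach the article demonstrates.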