M E Lunz, R E Schumacker. Journal of outcome measurement, 1(3):219-238, 1997.
Scoring and analysis of performance examinations: a comparison of methods and interpretations.
The purpose of this study was to compare the results and interpretation of data from a performance examination under four methods of analysis: 1) traditional summary statistics, 2) inter-judge correlations, 3) generalizability theory, and 4) the multi-facet Rasch model. Results indicated that each method identified similar sources of variance; however, the multi-facet Rasch model was the only method that linearizes the scores and accounts for differences in the particular examination challenged by each candidate before ability estimates are calculated.
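The abstract's point about the multi-facet Rasch model can be illustrated with a minimal sketch. In the rating-scale form of the model, the log-odds between adjacent score categories is decomposed into candidate ability, item difficulty, judge severity, and category thresholds: ln(P_k / P_{k-1}) = B - D - C - F_k. This is how the model places scores on a linear logit scale while adjusting for which items and judges a candidate happened to face. The function and parameter names below are illustrative, not from the paper:

```python
import math

def mfrm_category_probs(ability, item_diff, judge_severity, thresholds):
    """Category probabilities under a many-facet Rasch model
    (rating-scale form): ln(P_k / P_{k-1}) = B - D - C - F_k.

    thresholds holds F_1..F_m; F_0 is fixed at 0 by convention.
    Returns a list of probabilities for categories 0..m.
    """
    # Linear (logit-scale) location of this candidate on this
    # item as rated by this judge -- the adjustment the abstract
    # refers to: harder items and severer judges lower it.
    loc = ability - item_diff - judge_severity

    # Cumulative log-numerators for each category.
    log_num = [0.0]
    cum = 0.0
    for f in thresholds:
        cum += loc - f
        log_num.append(cum)

    # Softmax with max-subtraction for numerical stability.
    z = max(log_num)
    exps = [math.exp(v - z) for v in log_num]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical example: an able candidate (B = 1.0 logits), an
# average item (D = 0.0), a lenient judge (C = -0.5), on a 0-3
# rating scale with three category thresholds.
probs = mfrm_category_probs(ability=1.0, item_diff=0.0,
                            judge_severity=-0.5,
                            thresholds=[-1.0, 0.0, 1.0])
```

Because ability estimates come from these probabilities rather than raw score totals, two candidates with the same raw score but different judges or items can receive different (fairer) ability estimates, which is the property the abstract highlights.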