{"title":"A New Method to Balance Measurement Accuracy and Attribute Coverage in Cognitive Diagnostic Computerized Adaptive Testing.","authors":"Xiaojian Sun, Björn Andersson, Tao Xin","doi":"10.1177/01466216211040489","DOIUrl":null,"url":null,"abstract":"<p><p>As one of the important research areas of cognitive diagnosis assessment, cognitive diagnostic computerized adaptive testing (CD-CAT) has received much attention in recent years. Measurement accuracy is the major theme in CD-CAT, and both the item selection method and the attribute coverage have a crucial effect on measurement accuracy. A new attribute coverage index, the ratio of test length to the number of attributes (RTA), is introduced in the current study. RTA is appropriate when the item pool comprises many items that measure multiple attributes where it can both produce acceptable measurement accuracy and balance the attribute coverage. With simulations, the new index is compared to the original item selection method (ORI) and the attribute balance index (ABI), which have been proposed in previous studies. The results show that (1) the RTA method produces comparable measurement accuracy to the ORI method under most item selection methods; (2) the RTA method produces higher measurement accuracy than the ABI method for most item selection methods, with the exception of the mutual information item selection method; (3) the RTA method prefers items that measure multiple attributes, compared to the ORI and ABI methods, while the ABI prefers items that measure a single attribute; and (4) the RTA method performs better than the ORI method with respect to attribute coverage, while it performs worse than the ABI with long tests.</p>","PeriodicalId":48300,"journal":{"name":"Applied Psychological Measurement","volume":"45 7-8","pages":"463-476"},"PeriodicalIF":1.0000,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8640349/pdf/10.1177_01466216211040489.pdf","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Applied Psychological Measurement","FirstCategoryId":"102","ListUrlMain":"https://doi.org/10.1177/01466216211040489","RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2021/9/15 0:00:00","PubModel":"Epub","JCR":"Q4","JCRName":"PSYCHOLOGY, MATHEMATICAL","Score":null,"Total":0}
Citations: 1
Abstract
As one of the important research areas of cognitive diagnostic assessment, cognitive diagnostic computerized adaptive testing (CD-CAT) has received much attention in recent years. Measurement accuracy is the central concern in CD-CAT, and both the item selection method and the attribute coverage have a crucial effect on it. This study introduces a new attribute coverage index, the ratio of test length to the number of attributes (RTA). RTA is appropriate when the item pool comprises many items that measure multiple attributes; in that setting it can both produce acceptable measurement accuracy and balance attribute coverage. In simulations, the new index is compared with the original item selection method (ORI) and the attribute balance index (ABI) proposed in previous studies. The results show that (1) the RTA method produces measurement accuracy comparable to the ORI method under most item selection methods; (2) the RTA method produces higher measurement accuracy than the ABI method for most item selection methods, the exception being the mutual information item selection method; (3) compared to the ORI and ABI methods, the RTA method prefers items that measure multiple attributes, whereas the ABI prefers items that measure a single attribute; and (4) the RTA method outperforms the ORI method with respect to attribute coverage but performs worse than the ABI method for long tests.
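The abstract defines RTA only as the ratio of test length to the number of attributes. The sketch below is a hypothetical illustration, assuming that ratio is used as a per-attribute coverage target that weights an information-based item-selection rule; the function names (rta_weight, select_item), the blending scheme, and the toy data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Hypothetical sketch: folding a ratio-based attribute-coverage target into a
# CD-CAT item-selection step. The weighting rule is an assumption for
# illustration; the paper's actual RTA method may differ.

def rta_weight(q_vector, attribute_counts, test_length, n_attributes):
    """Score a candidate item by how many of its attributes still need coverage.

    q_vector         : 0/1 array of attributes measured by the candidate item
    attribute_counts : how often each attribute has been measured so far
    test_length      : planned number of items
    n_attributes     : number of attributes in the Q-matrix
    """
    target = test_length / n_attributes          # the ratio the index is named after
    measured = q_vector.astype(bool)
    if not measured.any():
        return 0.0
    # Fraction of the item's attributes still below the coverage target.
    return float(np.mean(attribute_counts[measured] < target))

def select_item(info, q_matrix, administered, attribute_counts, test_length):
    """Pick the next item by information, weighted by remaining coverage need."""
    n_items, n_attributes = q_matrix.shape
    best_item, best_score = None, -np.inf
    for j in range(n_items):
        if j in administered:
            continue
        w = rta_weight(q_matrix[j], attribute_counts, test_length, n_attributes)
        score = info[j] * (0.5 + 0.5 * w)        # blend information with coverage need
        if score > best_score:
            best_item, best_score = j, score
    return best_item

# Toy usage: 30 items measuring 5 attributes, planned test length of 20.
rng = np.random.default_rng(0)
q_matrix = (rng.random((30, 5)) < 0.4).astype(int)
info = rng.random(30)                            # stand-in for item information values
attribute_counts = np.zeros(5)
next_item = select_item(info, q_matrix, administered=set(),
                        attribute_counts=attribute_counts, test_length=20)
```

Under these assumptions, blending coverage need multiplicatively with item information keeps highly informative items competitive while nudging selection toward attributes that still fall short of the test-length-to-attribute ratio, which is the kind of accuracy/coverage trade-off the abstract describes.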
Journal description:
Applied Psychological Measurement publishes empirical research on the application of techniques of psychological measurement to substantive problems in all areas of psychology and related disciplines.