Stable and Diverse: A Unified Approach for Computerized Adaptive Testing

Yuting Ning, Ye Liu, Zhenya Huang, Haoyang Bi, Qi Liu, Enhong Chen, Dan Zhang

2021 IEEE 7th International Conference on Cloud Computing and Intelligent Systems (CCIS), November 7, 2021. DOI: 10.1109/CCIS53392.2021.9754532
Computerized Adaptive Testing (CAT), which aims to provide a personalized test for each examinee, is an emerging task in the intelligent education field. A CAT system selects questions step by step according to each examinee's knowledge state, which is estimated by a Cognitive Diagnosis Model (CDM). Most existing methods depend on the performance of a single CDM, which is often unstable. Moreover, they may select similar questions when generating a test, largely ignoring the diversity of the selected questions. To this end, in this paper we propose a novel framework, namely Ensembled Computerized Adaptive Testing (EnCAT). Specifically, EnCAT comprises two components: an ensemble part and an explore part. In the ensemble part, we combine multiple CDMs to determine whether a question is informative, which ensures the stability of the CAT process. In the explore part, we learn question representations from question content and design a mechanism to quantify the similarity between questions, which avoids selecting similar questions and requires no expensive human labeling. Finally, extensive experiments are conducted on a real-world dataset, and the results demonstrate the effectiveness and strong performance of the proposed EnCAT framework.
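To make the two-part selection idea concrete, below is a minimal sketch of one question-selection step under stated assumptions: the abstract does not specify the exact scoring or similarity functions, so the averaging of CDM informativeness scores, the cosine-similarity penalty over content embeddings, and the trade-off weight `alpha` are illustrative choices, not the paper's actual formulation.

```python
import numpy as np

def select_next_question(candidate_embs, candidate_ids, selected_embs,
                         cdm_informativeness, alpha=0.5):
    """Pick the next question by trading off ensemble informativeness
    against similarity to already-selected questions (hypothetical sketch).

    candidate_embs: (n, d) content embeddings of candidate questions
    candidate_ids:  list of n question ids
    selected_embs:  (m, d) embeddings of questions already administered
    cdm_informativeness: list of callables, one per CDM; each maps a
        question id to an informativeness score for the current examinee
    alpha: illustrative trade-off weight between informativeness and diversity
    """
    # Ensemble part: average informativeness across multiple CDMs so that
    # no single (possibly unstable) CDM dominates the selection.
    info = np.array([
        np.mean([cdm(qid) for cdm in cdm_informativeness])
        for qid in candidate_ids
    ])

    # Explore part: penalize candidates that are too similar (by cosine
    # similarity of content embeddings) to questions already selected.
    if len(selected_embs) > 0:
        a = candidate_embs / np.linalg.norm(candidate_embs, axis=1, keepdims=True)
        b = selected_embs / np.linalg.norm(selected_embs, axis=1, keepdims=True)
        max_sim = (a @ b.T).max(axis=1)          # worst-case redundancy per candidate
    else:
        max_sim = np.zeros(len(candidate_ids))   # first step: no redundancy penalty

    score = alpha * info - (1.0 - alpha) * max_sim
    return candidate_ids[int(np.argmax(score))]
```

In such a loop, the CAT system would call this selector against the remaining question pool at each step, administer the chosen question, and update every CDM in the ensemble with the examinee's response before the next selection.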