{"title":"面向知识组件识别的众包","authors":"Steven Moore, Huy A. Nguyen, John C. Stamper","doi":"10.1145/3386527.3405940","DOIUrl":null,"url":null,"abstract":"Assigning a set of hypothesized knowledge components (KCs) to assessment items within an ed-tech system enables us to better estimate student learning. However, creating and assigning these KCs is a time-consuming process that often requires domain expertise. In this study, we present the results of crowdsourcing KCs for problems in the domain of mathematics and English writing, as a first step in leveraging the crowd to expedite this task. Crowdworkers were presented with a problem and asked to provide the underlying skills required to solve it. Additionally, we investigated the effect of priming crowdworkers with related content before having them generate these KCs. We then analyzed their contributions through qualitative coding and found that across both the math and writing domains roughly 33% of the crowdsourced KCs directly matched those generated by domain experts for the same problems.","PeriodicalId":20608,"journal":{"name":"Proceedings of the Seventh ACM Conference on Learning @ Scale","volume":"16 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2020-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Towards Crowdsourcing the Identification of Knowledge Components\",\"authors\":\"Steven Moore, Huy A. Nguyen, John C. Stamper\",\"doi\":\"10.1145/3386527.3405940\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Assigning a set of hypothesized knowledge components (KCs) to assessment items within an ed-tech system enables us to better estimate student learning. However, creating and assigning these KCs is a time-consuming process that often requires domain expertise. In this study, we present the results of crowdsourcing KCs for problems in the domain of mathematics and English writing, as a first step in leveraging the crowd to expedite this task. Crowdworkers were presented with a problem and asked to provide the underlying skills required to solve it. Additionally, we investigated the effect of priming crowdworkers with related content before having them generate these KCs. 
We then analyzed their contributions through qualitative coding and found that across both the math and writing domains roughly 33% of the crowdsourced KCs directly matched those generated by domain experts for the same problems.\",\"PeriodicalId\":20608,\"journal\":{\"name\":\"Proceedings of the Seventh ACM Conference on Learning @ Scale\",\"volume\":\"16 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-08-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the Seventh ACM Conference on Learning @ Scale\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3386527.3405940\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the Seventh ACM Conference on Learning @ Scale","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3386527.3405940","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Towards Crowdsourcing the Identification of Knowledge Components
Abstract: Assigning a set of hypothesized knowledge components (KCs) to assessment items within an ed-tech system enables us to better estimate student learning. However, creating and assigning these KCs is a time-consuming process that often requires domain expertise. In this study, we present the results of crowdsourcing KCs for problems in the domains of mathematics and English writing, as a first step toward leveraging the crowd to expedite this task. Crowdworkers were presented with a problem and asked to provide the underlying skills required to solve it. Additionally, we investigated the effect of priming crowdworkers with related content before having them generate these KCs. We then analyzed their contributions through qualitative coding and found that, across both the math and writing domains, roughly 33% of the crowdsourced KCs directly matched those generated by domain experts for the same problems.
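To make the "direct match" statistic concrete, below is a minimal illustrative sketch of how such a match rate could be computed. Note the assumptions: the paper's actual analysis relied on human qualitative coding rather than automated string matching, and the function and variable names here (direct_match_rate, crowd_kcs, expert_kcs) are hypothetical, not from the paper.

```python
# Hypothetical sketch: estimate the fraction of crowdsourced KCs that
# exactly match an expert-authored KC for the same problem, after light
# normalization. The paper itself used human qualitative coding; this is
# only an illustration of the match-rate computation.

def normalize(kc: str) -> str:
    """Lowercase and collapse whitespace so trivially different labels compare equal."""
    return " ".join(kc.lower().split())

def direct_match_rate(crowd_kcs: dict[str, list[str]],
                      expert_kcs: dict[str, list[str]]) -> float:
    """Fraction of crowdsourced KCs matching an expert KC for the same problem."""
    matched = total = 0
    for problem, kcs in crowd_kcs.items():
        expert = {normalize(k) for k in expert_kcs.get(problem, [])}
        for kc in kcs:
            total += 1
            matched += normalize(kc) in expert
    return matched / total if total else 0.0

# Toy example data for a single (hypothetical) math problem.
crowd = {"p1": ["adding fractions", "Finding common denominators", "multiplication"]}
expert = {"p1": ["adding fractions", "finding common denominators"]}
print(f"{direct_match_rate(crowd, expert):.0%}")  # -> 67%
```

In practice, exact string matching would undercount agreement (crowdworkers and experts rarely phrase a skill identically), which is why qualitative coding of semantic equivalence, as in the study, is the more appropriate analysis.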