Toward Effective Courseware at Scale: Investigating Automatically Generated Questions as Formative Practice

Rachel Van Campenhout, Noam Brown, Bill Jerome, Jeffrey S. Dittel, Benny G. Johnson

Proceedings of the Eighth ACM Conference on Learning @ Scale, June 8, 2021. DOI: 10.1145/3430895.3460162
Courseware is a comprehensive learning environment that engages students in a learning-by-doing approach while giving instructors data-driven insights into their class, providing a scalable solution for many instructional models. However, courseware, and the volume of formative questions required to make it effective, is time-consuming and expensive to create. Using artificial intelligence for automatic question generation can reduce the time and cost of developing formative questions in courseware. It is critical, however, that automatically generated (AG) questions have a level of quality on par with human-authored (HA) questions in order to be confident in their usage at scale. Our research question is therefore: are student interactions with AG questions equivalent to those with HA questions with respect to engagement, difficulty, and persistence metrics? This paper evaluates data for AG and HA questions that students used as formative practice in their university Communication course. Analysis of AG and HA questions shows that our first generation of AG questions performs as well as HA questions in multiple important respects.
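To make the three metrics named in the abstract concrete, the following Python sketch computes per-question engagement, difficulty, and persistence from question-interaction logs and compares the AG and HA groups with a nonparametric test. The operationalizations here (engagement as the share of enrolled students who attempt a question, difficulty as the first-attempt error rate, and persistence as the share of students who retry after an incorrect first attempt) are illustrative assumptions, not necessarily the paper's exact definitions.

# Sketch: comparing AG and HA questions on engagement, difficulty,
# and persistence. Metric definitions are assumed for illustration.
from dataclasses import dataclass
from statistics import mean
from scipy.stats import mannwhitneyu

@dataclass
class Attempt:
    student_id: str
    question_id: str
    attempt_number: int
    correct: bool

def question_metrics(attempts, enrolled_students):
    """Compute per-question engagement, difficulty, and persistence rates."""
    by_question = {}
    for a in attempts:
        by_question.setdefault(a.question_id, []).append(a)
    metrics = []
    for qid, rows in by_question.items():
        students = {a.student_id for a in rows}
        firsts = [a for a in rows if a.attempt_number == 1]
        wrong_first = {a.student_id for a in firsts if not a.correct}
        # Students who answered incorrectly on the first try and tried again.
        retried = {a.student_id for a in rows
                   if a.attempt_number > 1 and a.student_id in wrong_first}
        metrics.append({
            "question_id": qid,
            # Share of enrolled students who attempted the question at all.
            "engagement": len(students) / enrolled_students,
            # Share of first attempts that were incorrect.
            "difficulty": len(wrong_first) / max(len(firsts), 1),
            # Share of students who retried after an incorrect first attempt.
            "persistence": len(retried) / max(len(wrong_first), 1),
        })
    return metrics

def compare_groups(ag_values, ha_values):
    """Nonparametric check of whether AG and HA metric distributions differ."""
    stat, p = mannwhitneyu(ag_values, ha_values, alternative="two-sided")
    return {"ag_mean": mean(ag_values), "ha_mean": mean(ha_values), "p": p}

In use, one would tag each question as AG or HA, collect the metric values for each group (e.g., the "difficulty" values), and pass the two lists to compare_groups; a large p-value is consistent with the abstract's claim that AG questions perform on par with HA questions, though equivalence testing would be the stricter approach.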