Improved Testing of PrairieLearn Question Generators
Aayush Shah, Alan Lee, Chris Chi, Ruiwei Xiao, Pranav Sukumar, Jesus Villalobos, D. Garcia
Proceedings of the 53rd ACM Technical Symposium on Computer Science Education V. 2, March 3, 2022
DOI: 10.1145/3478432.3499113
With many institutions forced online due to the pandemic, assessments became a challenge for many educators. Take-home exams provided the flexibility required for varied student needs (and time zones), but they were vulnerable to cheating. In response, many turned to tools that could present a different exam to every student. PrairieLearn is a feature-rich open-source package that allows educators to author randomized Question Generators; we have been using the tool extensively for the last two years, and it has a fast-growing educator user base. One of the first issues we noticed with the system was that the only way to quality assure (QA) a question was to click the "new variant" button, which would spin whatever internal random number generators were in use to produce a new question. Sometimes it was the same one you had just seen, and other times it never seemed to "hit" on the variant you were looking to debug. This poster describes our team's work to solve this problem through the design of an API that allows a question to declare how many total variants it has, and to be asked to render variant i. The user interface could then be extended to show which variant the QA team was viewing out of the total (e.g., 7/50), and "next", "previous", and "go to variant" buttons would allow the team to easily QA all variants.
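To make the idea concrete, here is a minimal sketch of what such an API might look like for a PrairieLearn-style Python question generator. This is an illustration, not the actual PrairieLearn interface: the hook name `num_variants()`, the `variant` parameter, and the arithmetic question itself are all hypothetical. The key idea is that seeding the random number generator with the variant index makes variant i reproducible on demand, so a QA harness (or UI buttons) can step through every variant deterministically.

```python
import random

def num_variants():
    """Hypothetical hook: declare how many distinct variants this question has."""
    return 50

def generate(data, variant=None):
    """Render variant `variant` if given; otherwise pick one at random,
    matching the original "new variant" behavior."""
    if variant is None:
        variant = random.randrange(num_variants())
    # Seed a private RNG with the variant index so that variant i
    # always produces exactly the same question.
    rng = random.Random(variant)
    a = rng.randint(1, 10)
    b = rng.randint(1, 10)
    data["params"] = {"a": a, "b": b,
                      "variant": variant, "total": num_variants()}
    data["correct_answers"] = {"sum": a + b}
    return data

# A QA pass can now enumerate every variant (shown as e.g. "7/50" in the UI)
# instead of repeatedly clicking "new variant" and hoping to hit the bad one.
for i in range(num_variants()):
    generate({}, variant=i)
```

The deterministic mapping from index to question is what turns QA from a guessing game into an exhaustive sweep: with 50 declared variants, 50 calls are guaranteed to cover them all.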