Improved Testing of PrairieLearn Question Generators

Aayush Shah, Alan Lee, Chris Chi, Ruiwei Xiao, Pranav Sukumar, Jesus Villalobos, D. Garcia
{"title":"Improved Testing of PrairieLearn Question Generators","authors":"Aayush Shah, Alan Lee, Chris Chi, Ruiwei Xiao, Pranav Sukumar, Jesus Villalobos, D. Garcia","doi":"10.1145/3478432.3499113","DOIUrl":null,"url":null,"abstract":"With many institutions forced online due to the pandemic, assessments became a challenge for many educators. Take-home exams provided the flexibility required for varied student needs (and time zones), but they were vulnerable to cheating. In response, many turned to tools that could present a different exam for every student. PrairieLearn is a feature-rich open-source package that allows educators to author randomized Question Generators; we have been using the tool extensively for the last two years, and it has a fast-growing educator user base. One of the first issues we noticed with the system was that the only way to quality assure (QA) a question was to click the new variant button, which would spin whatever internal random number generators were used again to produce a new question. Sometimes it was the same one you had just seen, and other times it would never seem to \"hit'' on the variant you were looking to debug. This poster describes our team's work to solve this problem through the design of an API that would allow a question to declare how many total variants it had, and be asked to render variant i. The user interface could then be extended to list what variant the QA team was viewing out of the total (e.g., 7/50), and a next, previous and go to a particular variant buttons would allow for the team to easily QA all variants.","PeriodicalId":113773,"journal":{"name":"Proceedings of the 53rd ACM Technical Symposium on Computer Science Education V. 2","volume":"38 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 53rd ACM Technical Symposium on Computer Science Education V. 2","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3478432.3499113","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

With many institutions forced online by the pandemic, assessments became a challenge for many educators. Take-home exams provided the flexibility required for varied student needs (and time zones), but they were vulnerable to cheating. In response, many turned to tools that could present a different exam to every student. PrairieLearn is a feature-rich open-source package that allows educators to author randomized Question Generators; we have been using the tool extensively for the last two years, and it has a fast-growing educator user base. One of the first issues we noticed with the system was that the only way to quality assure (QA) a question was to click the "new variant" button, which would re-spin whatever internal random number generators the question used and produce a new variant. Sometimes it was the same one you had just seen, and other times it would never seem to "hit" the variant you were looking to debug. This poster describes our team's work to solve this problem through the design of an API that allows a question to declare how many total variants it has and to be asked to render variant i. The user interface could then be extended to show which variant the QA team was viewing out of the total (e.g., 7/50), and "next", "previous", and "go to variant" buttons would allow the team to easily QA all variants.
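The poster describes this enumerable-variant API only at a high level. Below is a minimal sketch of how such a contract might look in a PrairieLearn-style server.py; the num_variants() hook and the variant_index field are assumptions introduced here for illustration, not part of PrairieLearn's published API.

import random

# Hypothetical hook (our assumption): the question declares its total
# variant count so the QA interface can display "variant i of N" and
# offer next/previous/go-to navigation.
def num_variants():
    # 10 coefficient choices x 5 operand choices = 50 enumerable variants
    return 50

def generate(data):
    # Assumed extension: the platform passes the requested 0-based variant
    # index in data; falling back to a random draw preserves the existing
    # "new variant" behavior when no index is requested.
    i = data.get("variant_index", random.randrange(num_variants()))

    # Decode the single index into the question's independent random
    # choices, so that variant i always renders identically.
    coefficient = 1 + (i % 10)   # 1..10
    operand = 2 * (i // 10)      # 0, 2, 4, 6, 8

    data["params"]["coefficient"] = coefficient
    data["params"]["operand"] = operand
    data["correct_answers"]["result"] = coefficient + operand

if __name__ == "__main__":
    # QA walk: render every variant exactly once, instead of clicking the
    # random "new variant" button and hoping to hit the one to debug.
    for i in range(num_variants()):
        data = {"params": {}, "correct_answers": {}, "variant_index": i}
        generate(data)
        print(f"variant {i + 1}/{num_variants()}: {data['params']}")

The key design point the sketch illustrates is determinism: mapping a single integer index onto all of the question's random choices makes variant i reproducible on demand, which is what lets the UI report "7/50" and support direct navigation to any variant.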