Selecting student-authored questions for summative assessments

Research in Learning Technology · Impact Factor 1.9 · JCR Q2, Education & Educational Research
Alice Huang, Dale Hancock, Matthew Clemson, Giselle Yeo, Dylan Harney, Paul Denny, Gareth Denyer
{"title":"选择学生撰写的问题进行总结性评估","authors":"Alice Huang,Dale Hancock,Matthew Clemson,Giselle Yeo,Dylan Harney,Paul Denny,Gareth Denyer","doi":"10.25304/rlt.v29.2517","DOIUrl":null,"url":null,"abstract":"Production of high-quality multiple-choice questions (MCQs) for both formative and summative assessments is a time-consuming task requiring great skill, creativity and insight. The transition to online examinations, with the concomitant exposure of previously tried-and-tested MCQs, exacerbates the challenges of question production and highlights the need for innovative solutions. Several groups have shown that it is practical to leverage the student cohort to produce a very large number of syllabus-aligned MCQs for study banks. Although student-generated questions are well suited for formative feedback and practice activities, they are generally not thought to be suitable for high-stakes assessments. In this study, we aimed to demonstrate that training can be provided to students in a scalable fashion to generate questions of similar quality to those produced by experts and that identification of suitable questions can be achieved with minimal academic review and editing. Second-year biochemistry and molecular biology students were assigned a series of activities designed to coach them in the art of writing and critiquing MCQs. This training resulted in the production of over 1000 MCQs that were then gauged for potential by either expert academic judgement or via a data-driven approach in which the questions were trialled objectively in a low-stakes test. Questions selected by either method were then deployed in a high-stakes in-semester assessment alongside questions from two academically authored sources: textbook-derived MCQs and past paper questions. A total of 120 MCQs from these four sources were deployed in assessments attempted by over 600 students. Each question was subjected to rigorous performance analysis, including the calculation of standard metrics from classical test theory and more sophisticated item response theory (IRT) measures. The results showed that MCQs authored by students, and selected at low cost, performed as well as questions authored by academics, illustrating the potential of this strategy for the efficient creation of large numbers of high-quality MCQs for summative assessment.","PeriodicalId":46691,"journal":{"name":"Research in Learning Technology","volume":"24 1","pages":""},"PeriodicalIF":1.9000,"publicationDate":"2021-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Selecting student-authored questions for summative assessments\",\"authors\":\"Alice Huang,Dale Hancock,Matthew Clemson,Giselle Yeo,Dylan Harney,Paul Denny,Gareth Denyer\",\"doi\":\"10.25304/rlt.v29.2517\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Production of high-quality multiple-choice questions (MCQs) for both formative and summative assessments is a time-consuming task requiring great skill, creativity and insight. The transition to online examinations, with the concomitant exposure of previously tried-and-tested MCQs, exacerbates the challenges of question production and highlights the need for innovative solutions. Several groups have shown that it is practical to leverage the student cohort to produce a very large number of syllabus-aligned MCQs for study banks. 
Although student-generated questions are well suited for formative feedback and practice activities, they are generally not thought to be suitable for high-stakes assessments. In this study, we aimed to demonstrate that training can be provided to students in a scalable fashion to generate questions of similar quality to those produced by experts and that identification of suitable questions can be achieved with minimal academic review and editing. Second-year biochemistry and molecular biology students were assigned a series of activities designed to coach them in the art of writing and critiquing MCQs. This training resulted in the production of over 1000 MCQs that were then gauged for potential by either expert academic judgement or via a data-driven approach in which the questions were trialled objectively in a low-stakes test. Questions selected by either method were then deployed in a high-stakes in-semester assessment alongside questions from two academically authored sources: textbook-derived MCQs and past paper questions. A total of 120 MCQs from these four sources were deployed in assessments attempted by over 600 students. Each question was subjected to rigorous performance analysis, including the calculation of standard metrics from classical test theory and more sophisticated item response theory (IRT) measures. The results showed that MCQs authored by students, and selected at low cost, performed as well as questions authored by academics, illustrating the potential of this strategy for the efficient creation of large numbers of high-quality MCQs for summative assessment.\",\"PeriodicalId\":46691,\"journal\":{\"name\":\"Research in Learning Technology\",\"volume\":\"24 1\",\"pages\":\"\"},\"PeriodicalIF\":1.9000,\"publicationDate\":\"2021-02-03\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Research in Learning Technology\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.25304/rlt.v29.2517\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"EDUCATION & EDUCATIONAL RESEARCH\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Research in Learning Technology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.25304/rlt.v29.2517","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"EDUCATION & EDUCATIONAL RESEARCH","Score":null,"Total":0}
Citations: 0

Abstract

Production of high-quality multiple-choice questions (MCQs) for both formative and summative assessments is a time-consuming task requiring great skill, creativity and insight. The transition to online examinations, with the concomitant exposure of previously tried-and-tested MCQs, exacerbates the challenges of question production and highlights the need for innovative solutions. Several groups have shown that it is practical to leverage the student cohort to produce a very large number of syllabus-aligned MCQs for study banks. Although student-generated questions are well suited for formative feedback and practice activities, they are generally not thought to be suitable for high-stakes assessments. In this study, we aimed to demonstrate that training can be provided to students in a scalable fashion to generate questions of similar quality to those produced by experts and that identification of suitable questions can be achieved with minimal academic review and editing. Second-year biochemistry and molecular biology students were assigned a series of activities designed to coach them in the art of writing and critiquing MCQs. This training resulted in the production of over 1000 MCQs that were then gauged for potential by either expert academic judgement or via a data-driven approach in which the questions were trialled objectively in a low-stakes test. Questions selected by either method were then deployed in a high-stakes in-semester assessment alongside questions from two academically authored sources: textbook-derived MCQs and past paper questions. A total of 120 MCQs from these four sources were deployed in assessments attempted by over 600 students. Each question was subjected to rigorous performance analysis, including the calculation of standard metrics from classical test theory and more sophisticated item response theory (IRT) measures. The results showed that MCQs authored by students, and selected at low cost, performed as well as questions authored by academics, illustrating the potential of this strategy for the efficient creation of large numbers of high-quality MCQs for summative assessment.
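The abstract's final analysis step, standard metrics from classical test theory (CTT) plus item response theory (IRT) measures, is conventional psychometrics. As a minimal sketch of what such calculations involve (not the authors' actual pipeline; the response matrix and function names below are illustrative assumptions), the following Python computes two common CTT statistics per item, the difficulty index (proportion correct) and the point-biserial discrimination against the rest-score, and evaluates a two-parameter logistic (2PL) IRT item characteristic curve:

```python
import numpy as np

def ctt_item_stats(responses: np.ndarray) -> list:
    """Classical test theory statistics for a 0/1 response matrix
    (rows = students, columns = items)."""
    total = responses.sum(axis=1)            # each student's raw score
    stats = []
    for j in range(responses.shape[1]):
        item = responses[:, j]
        difficulty = item.mean()             # proportion answering correctly
        rest = total - item                  # rest-score avoids self-correlation
        discrimination = np.corrcoef(item, rest)[0, 1]  # point-biserial
        stats.append({"item": j,
                      "difficulty": float(difficulty),
                      "discrimination": float(discrimination)})
    return stats

def irt_2pl(theta: float, a: float, b: float) -> float:
    """Two-parameter logistic IRT model: probability that a student of
    ability theta answers correctly an item with discrimination a and
    difficulty b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Illustrative 6-student x 4-item response matrix (1 = correct).
responses = np.array([
    [1, 1, 0, 1],
    [1, 0, 1, 1],
    [0, 1, 0, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 1, 0, 1],
])

for s in ctt_item_stats(responses):
    print(s)
print(f"P(correct | theta=0.5, a=1.2, b=0.0) = {irt_2pl(0.5, 1.2, 0.0):.2f}")  # ~0.65
```

In a selection workflow like the one the abstract describes, items with very low or negative discrimination, or with extreme difficulty values, would typically be flagged for review before deployment in a high-stakes assessment.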
Source journal: Research in Learning Technology (Education & Educational Research)
CiteScore: 6.50 · Self-citation rate: 0.00% · Articles per year: 13 · Review time: 20 weeks