R2DE: a NLP approach to estimating IRT parameters of newly generated questions

Luca Benedetto, Andrea Cappelli, R. Turrin, P. Cremonesi
Proceedings of the Tenth International Conference on Learning Analytics & Knowledge (LAK '20)
DOI: 10.1145/3375462.3375517
Published: 2020-01-21
Citations: 27

Abstract

The main objective of exams is to assess students' expertise on a specific subject. Such expertise, also referred to as skill or knowledge level, can then be leveraged in different ways (e.g., to assign grades to students, or to identify students who might need support). Similarly, the questions appearing in exams have to be assessed in some way before being used to evaluate students. Standard approaches to question assessment are either subjective (e.g., assessment by human experts) or introduce a long delay into the question generation process (e.g., pretesting with real students). In this work we introduce R2DE (a Regressor for Difficulty and Discrimination Estimation), a model capable of assessing newly generated multiple-choice questions by looking at the text of the question and the text of the possible choices. In particular, it estimates the difficulty and the discrimination of each question, as defined in Item Response Theory. We also present the results of extensive experiments carried out on a real-world, large-scale dataset from an e-learning platform, showing that our model can be used to perform an initial assessment of newly created questions and ease some of the problems that arise in question generation.
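The difficulty and discrimination parameters that R2DE estimates come from the two-parameter logistic (2PL) model of Item Response Theory. As a minimal illustrative sketch (not the paper's implementation), the 2PL model gives the probability that a student with latent skill `theta` answers an item correctly; `difficulty` shifts the response curve along the skill axis, while `discrimination` controls how sharply the probability rises around the difficulty point:

```python
import math

def irt_2pl(theta: float, difficulty: float, discrimination: float) -> float:
    """2PL item response function: probability that a student with
    latent skill `theta` answers the item correctly."""
    return 1.0 / (1.0 + math.exp(-discrimination * (theta - difficulty)))

# A student whose skill exactly matches the item's difficulty
# has a 50% chance of answering correctly:
print(irt_2pl(theta=0.0, difficulty=0.0, discrimination=1.0))  # 0.5
```

Estimating `difficulty` and `discrimination` directly from question text, as R2DE does, avoids the pretesting with real students that would otherwise be needed to fit these parameters from response data.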