EXPLANATION-BASED AUTOMATED ASSESSMENT OF OPEN ENDED LEARNER RESPONSES

V. Rus
{"title":"EXPLANATION-BASED AUTOMATED ASSESSMENT OF OPEN ENDED LEARNER RESPONSES","authors":"V. Rus","doi":"10.12753/2066-026x-18-087","DOIUrl":null,"url":null,"abstract":"Open ended assessment items require students to freely articulate their thinking as opposed to, for instance, multiple choice questions. Such free generation of answers by students enables what we may call true assessment because these answers offer a direct view of learners’ mental models. Nevertheless, assessing open ended learner responses is extremely challenging, e.g., if done manually by experts it becomes prohibitively expensive to scale up to millions of learners. To address this scalability challenge, automated methods to assess students' free responses are being explored. To this end, we present a novel solution to automatically assess open ended learner responses based on recent advances in computational linguistics and optimization algorithms. Our proposed solution accounts for linguistic phenomena such as anaphora resolution and negation in order to reach a deeper level of semantic interpretation of student answers. This is a key advantage compared to previous methods that focus primarily on distributional semantic representations of texts. Furthermore, our method provides both a holistic score as well as a detailed explanation of the score by performing a concept-level analysis of student responses. We present results obtained with the proposed method on a dataset that is widely used to evaluate automated methods for assessing open ended learner responses. The results indicate that our method is extremely competitive or surpasses the performance of previously proposed methods. Furthermore, by being able to pick on concepts students have yet to articulate, it enables the development of more personalized and dynamic generation of feedback in intelligent tutoring systems.","PeriodicalId":371908,"journal":{"name":"14th International Conference eLearning and Software for Education","volume":"69 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-04-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"14th International Conference eLearning and Software for Education","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.12753/2066-026x-18-087","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

Open-ended assessment items require students to freely articulate their thinking, as opposed to, for instance, multiple-choice questions. Such free generation of answers by students enables what we may call true assessment, because these answers offer a direct view of learners' mental models. Nevertheless, assessing open-ended learner responses is extremely challenging: done manually by experts, it becomes prohibitively expensive to scale to millions of learners. To address this scalability challenge, automated methods to assess students' free responses are being explored. To this end, we present a novel solution to automatically assess open-ended learner responses based on recent advances in computational linguistics and optimization algorithms. Our proposed solution accounts for linguistic phenomena such as anaphora and negation in order to reach a deeper level of semantic interpretation of student answers. This is a key advantage over previous methods, which focus primarily on distributional semantic representations of texts. Furthermore, our method provides both a holistic score and a detailed explanation of that score by performing a concept-level analysis of student responses. We present results obtained with the proposed method on a dataset that is widely used to evaluate automated methods for assessing open-ended learner responses. The results indicate that our method matches or surpasses the performance of previously proposed methods. Furthermore, by being able to pick out concepts students have yet to articulate, it enables more personalized and dynamic generation of feedback in intelligent tutoring systems.
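To make the concept-level analysis the abstract describes more concrete, here is a minimal sketch of such a pipeline. Everything in it is an illustrative stand-in, not the paper's method: the ideal answer is assumed to be pre-decomposed into expected concepts, simple token overlap replaces the paper's semantic matching, a crude cue list replaces its negation handling, and anaphora resolution and the optimization component are omitted entirely. All names and thresholds are hypothetical.

```python
# Illustrative sketch of concept-level assessment (all names hypothetical).
# Token overlap stands in for the paper's semantic matching; a cue list
# stands in for its negation handling; anaphora resolution is omitted.

NEGATION_CUES = {"not", "no", "never"}  # toy sentence-level negation cues


def tokenize(text: str) -> list[str]:
    return text.lower().replace(",", " ").replace(".", " ").split()


def concept_covered(concept: str, response: str, threshold: float = 0.6) -> bool:
    """Crude stand-in for semantic matching: the fraction of concept tokens
    found in the response, vetoed if the response contains a negation cue."""
    concept_toks = tokenize(concept)
    response_toks = set(tokenize(response))
    overlap = sum(t in response_toks for t in concept_toks) / max(len(concept_toks), 1)
    negated = any(cue in response_toks for cue in NEGATION_CUES)
    return overlap >= threshold and not negated


def assess(response: str, expected_concepts: list[str]) -> dict:
    """Holistic score plus a per-concept breakdown, mirroring the abstract's
    claim that the method reports both; the missing concepts are what could
    drive personalized tutor feedback."""
    covered = {c: concept_covered(c, response) for c in expected_concepts}
    return {
        "holistic_score": sum(covered.values()) / len(expected_concepts),
        "covered": [c for c, hit in covered.items() if hit],
        "missing": [c for c, hit in covered.items() if not hit],
    }


if __name__ == "__main__":
    expected = [
        "the net force on the object is zero",
        "the object moves at constant velocity",
    ]
    student = "The object experiences zero net force, so it moves at a constant velocity."
    print(assess(student, expected))
```

A real system would replace the overlap test with the semantic matching and optimization machinery the abstract alludes to; the point of the sketch is only the shape of the output, a holistic score paired with a covered/missing concept breakdown that makes the score explainable.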