Research on A Machine Scoring Method of Role-Play Section in English Oral Test

Xinguang Li, Zhihe Yang, Shuai Chen, Shanxian Ma
{"title":"Research on A Machine Scoring Method of Role-Play Section in English Oral Test","authors":"Xinguang Li, Zhihe Yang, Shuai Chen, Shanxian Ma","doi":"10.1145/3498851.3498983","DOIUrl":null,"url":null,"abstract":"Computer-assisted instruction has been widely implemented in language learning since the remarkable development of speech technology and natural language processing. In English teaching, the traditional manual evaluation mode has been replaced by the computer. In this paper, we introduce a machine scoring method of Role-Play Section in the English oral test. Specifically, we design a scoring method of Role-Play section in Computer-based English Listening and Speaking Test (CELST) of the Guangdong college entrance examination (GDCEE), which can simulate human scoring. The role-play section is also called Three Questions (TQ) and Five Answers (FA), so two scoring modules are established, respectively. According to the features of the role-play section and the given manual scoring scale, the scoring task can be regarded as a short text similarity evaluation task, and we propose a corresponding multi-index text similarity evaluation method. Statistic-based word matching and keyword-focused semantic similarity evaluations are adopted in the TQ and FA scoring modules. Additionally, we consider the grammar factor and utilize semantic similarity-combined dependency parsing evaluation for TQ scoring. Based on the above five evaluative indicators, we linearly integrate them and corresponding expertise-based weights with linear regression optimization, thus constructing a comprehensive scoring model of the role-play Section in CELST. Experiments were conducted using real oral data of examinees in GDCEE. Results have indicated that the machine scoring method achieves impressive data consistency with human scoring.","PeriodicalId":89230,"journal":{"name":"Proceedings. IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology","volume":"29 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2021-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings. IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3498851.3498983","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Computer-assisted instruction has been widely adopted in language learning, driven by rapid advances in speech technology and natural language processing. In English teaching, traditional manual evaluation is increasingly being replaced by computer-based scoring. In this paper, we introduce a machine scoring method for the role-play section of an English oral test. Specifically, we design a scoring method for the role-play section of the Computer-based English Listening and Speaking Test (CELST) of the Guangdong college entrance examination (GDCEE) that simulates human scoring. The role-play section consists of two parts, Three Questions (TQ) and Five Answers (FA), so a separate scoring module is established for each. Given the features of the role-play section and the provided manual scoring scale, the scoring task can be regarded as a short-text similarity evaluation task, and we propose a corresponding multi-index text similarity evaluation method. Statistics-based word matching and keyword-focused semantic similarity evaluation are adopted in the TQ and FA scoring modules. Additionally, we account for grammar by combining semantic similarity with dependency-parsing evaluation for TQ scoring. We linearly combine the resulting five evaluation indicators with expertise-based weights refined by linear regression optimization, thus constructing a comprehensive scoring model for the role-play section of the CELST. Experiments were conducted on real oral responses from GDCEE examinees. The results indicate that the machine scoring method achieves strong consistency with human scoring.
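The final step described above, linearly combining five indicator scores with weights tuned by linear regression against human scores, can be illustrated with a short sketch. The snippet below is a minimal illustration only: the indicator functions (a word-overlap score and a keyword-coverage score), the data, and all names are hypothetical stand-ins, not the authors' implementation or the paper's actual indicators and scale.

```python
# Minimal sketch (not the paper's implementation) of combining several
# per-response indicator scores with weights fitted by least squares,
# mirroring the "linear regression optimization" step in the abstract.
import numpy as np


def word_match_score(response: str, reference: str) -> float:
    """Illustrative statistics-based word matching: fraction of reference
    words that also appear in the examinee's response."""
    ref_words = set(reference.lower().split())
    resp_words = set(response.lower().split())
    return len(ref_words & resp_words) / max(len(ref_words), 1)


def keyword_score(response: str, keywords: list[str]) -> float:
    """Illustrative keyword-focused score: fraction of scale keywords
    covered by the response (a stand-in for semantic similarity)."""
    resp = response.lower()
    hits = sum(1 for kw in keywords if kw.lower() in resp)
    return hits / max(len(keywords), 1)


def fit_weights(indicators: np.ndarray, human_scores: np.ndarray) -> np.ndarray:
    """Fit linear weights for the indicator scores against human scores
    by ordinary least squares."""
    weights, *_ = np.linalg.lstsq(indicators, human_scores, rcond=None)
    return weights


if __name__ == "__main__":
    # Hypothetical data: each row holds five indicator scores for one response.
    X = np.array([
        [0.9, 0.8, 0.7, 0.85, 0.9],
        [0.4, 0.5, 0.3, 0.45, 0.4],
        [0.7, 0.6, 0.65, 0.7, 0.75],
    ])
    y = np.array([4.5, 2.0, 3.5])  # corresponding human scores
    w = fit_weights(X, y)
    print("fitted weights:", w)
    print("machine scores:", X @ w)
```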