Sentence-BERT Distinguishes Good and Bad Essays in Cross-prompt Automated Essay Scoring

Toru Sasaki, Tomonari Masada
{"title":"Sentence-BERT Distinguishes Good and Bad Essays in Cross-prompt Automated Essay Scoring","authors":"Toru Sasaki, Tomonari Masada","doi":"10.1109/ICDMW58026.2022.00045","DOIUrl":null,"url":null,"abstract":"Automated Essay Scoring (AES) refers to a set of processes that automatically assigns grades to student-written essays with machine learning models. Existing AES models are mostly trained prompt-specifically with supervised learning, which requires the essay prompt to be accessible to the system vendor at the time of model training. However, essay prompts for high-stakes testing should usually be kept confidential before the test date, which demands the model to be cross-promptly trainable with pre-scored essay data already in hands. Document embeddings obtained from pretrained language models such as Sentence-BERT (sbert) are primarily expected to represent the semantic content of the text. We hypothesize SBERT embeddings also contain assessment-relevant elements that are extractable by document embedding decomposition through Principal Component Analysis (PCA) enhanced with Normalized Discounted Cumulative Gain (nDCG) measurement. The identified evaluative elements in the entire embedding space of the source essays are then cross-promptly transferred to the target essays written on different prompts for binary clustering task of dividing high/low-scored groups. The result implies non-finetuned SBERT already contains evaluative elements to distinguish good and bad essays.","PeriodicalId":146687,"journal":{"name":"2022 IEEE International Conference on Data Mining Workshops (ICDMW)","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2022-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE International Conference on Data Mining Workshops (ICDMW)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICDMW58026.2022.00045","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Automated Essay Scoring (AES) refers to a set of processes that automatically assign grades to student-written essays using machine learning models. Existing AES models are mostly trained prompt-specifically with supervised learning, which requires the essay prompt to be accessible to the system vendor at model-training time. However, essay prompts for high-stakes testing must usually be kept confidential before the test date, which requires the model to be trainable cross-prompt from pre-scored essay data already in hand. Document embeddings obtained from pretrained language models such as Sentence-BERT (SBERT) are primarily expected to represent the semantic content of a text. We hypothesize that SBERT embeddings also contain assessment-relevant elements, and that these can be extracted by decomposing the document embeddings with Principal Component Analysis (PCA) enhanced with a Normalized Discounted Cumulative Gain (nDCG) measurement. The evaluative elements identified in the embedding space of the source essays are then transferred cross-prompt to target essays written on different prompts, for the binary clustering task of dividing the essays into high- and low-scored groups. The results imply that non-finetuned SBERT embeddings already contain evaluative elements that distinguish good and bad essays.
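Read operationally, the abstract describes a four-step pipeline: embed scored source essays with SBERT, decompose the embedding space with PCA, rank the principal components by nDCG against the human scores, and cluster target essays from a different prompt along the top-ranked components. Below is a minimal sketch of that pipeline, assuming the sentence-transformers and scikit-learn libraries; the model name, the number of retained components, the top-5 selection rule, and the sign handling are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import ndcg_score

# Placeholder data; a real experiment would use pre-scored essays from one
# prompt (source) and unscored essays from a different prompt (target).
rng = np.random.default_rng(0)
source_essays = [f"source essay text {i}" for i in range(40)]
source_scores = rng.integers(1, 7, size=40)          # e.g. 1-6 holistic scores
target_essays = [f"target essay text {i}" for i in range(20)]

model = SentenceTransformer("all-MiniLM-L6-v2")      # any pretrained SBERT model
src_emb = model.encode(source_essays)                # (n_src, d) document embeddings

# Decompose the source embedding space with PCA.
pca = PCA(n_components=min(30, len(source_essays))).fit(src_emb)
src_proj = pca.transform(src_emb)                    # per-essay component activations

# Rank each principal component by nDCG: treat the human scores as relevance
# and the component activations as a predicted ranking. A component's sign is
# arbitrary, so both orientations are tested (an assumption of this sketch).
y_true = source_scores.reshape(1, -1)
ndcgs = [max(ndcg_score(y_true, s.reshape(1, -1)),
             ndcg_score(y_true, -s.reshape(1, -1)))
         for s in src_proj.T]
top = np.argsort(ndcgs)[::-1][:5]                    # most assessment-relevant axes

# Cross-prompt transfer: project the target essays onto the selected
# components and split them into two clusters, intended to correspond to
# the high- and low-scored groups.
tgt_proj = pca.transform(model.encode(target_essays))[:, top]
labels = KMeans(n_clusters=2, n_init=10).fit_predict(tgt_proj)
print(labels)
```

Whether the two clusters actually separate good from bad essays would then be checked against held-out human scores; the key design choice the abstract emphasizes is that no fine-tuning of SBERT and no target-prompt labels are involved.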