Evaluating the quality of student-generated content in learnersourcing: A large language model based approach

IF 4.8 · CAS Tier 2 (Education) · Q1 EDUCATION & EDUCATIONAL RESEARCH
Kangkang Li, Chengyang Qian, Xianmin Yang
DOI: 10.1007/s10639-024-12851-4
Journal: Education and Information Technologies
Published: 2024-07-17 (Journal Article)
Citations: 0

Abstract

In learnersourcing, automatic evaluation of student-generated content (SGC) matters because it streamlines the evaluation process, provides timely feedback, and makes grading more objective, ultimately supporting more effective and efficient learning. However, methods that aggregate students' own evaluations of SGC suffer from inefficiency and cold-start problems, while methods that combine feature engineering with deep learning suffer from insufficient accuracy and poor scalability. This study introduces an automated SGC quality-evaluation method based on a large language model (LLM). The method produces a comprehensive evaluation by having the LLM simulate the cognitive process of human evaluation through a Reason-Act-Evaluate (RAE) prompt, and by integrating an assisted model that analyzes the external features of SGC. The study tested the feasibility of the method on SGC from a learnersourcing platform. The results show that, with the RAE prompt, the LLM achieves high agreement with experts on SGC quality evaluation, and that better results can be achieved with the help of the assisted model.
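The paper does not reproduce the RAE prompt itself, so the template, rubric wording, and helper names below (`build_rae_prompt`, `parse_score`) are illustrative assumptions, not the authors' implementation. This minimal sketch shows the general shape of such a pipeline: assemble a prompt with explicit Reason, Act, and Evaluate stages (optionally enriched with external features from an assisted model), send it to an LLM, and parse a numeric score from the response. The LLM call is stubbed with a canned response so the example is self-contained.

```python
import re

# Hypothetical RAE-style template; the actual prompt in the paper may differ.
RAE_TEMPLATE = """You are grading a student-generated question.
Reason: analyze the question's clarity, relevance, and difficulty step by step.
Act: compare your analysis against the rubric below.
Evaluate: output a final line of the form "Score: <1-5>".

Rubric: {rubric}
Question: {sgc}
External features (from assisted model): {features}
"""

def build_rae_prompt(sgc: str, rubric: str, features: str) -> str:
    """Assemble an RAE-style prompt for one piece of student content."""
    return RAE_TEMPLATE.format(rubric=rubric, sgc=sgc, features=features)

def parse_score(llm_response: str):
    """Extract the numeric score from the model's final 'Score: N' line."""
    match = re.search(r"Score:\s*([1-5])", llm_response)
    return int(match.group(1)) if match else None

prompt = build_rae_prompt(
    sgc="What is the time complexity of binary search?",
    rubric="1 = off-topic or unclear ... 5 = clear, relevant, appropriately difficult",
    features="length=8 words, readability=easy",
)
# A canned response stands in for a real LLM API call.
mock_response = "Reason: the question is clear and on-topic...\nScore: 4"
print(parse_score(mock_response))  # → 4
```

In a real deployment the `mock_response` would come from an LLM API, and the parsed scores could be compared against expert ratings (e.g. with an agreement statistic such as Cohen's kappa) to replicate the kind of LLM-expert agreement check the study reports.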


Source journal: Education and Information Technologies (EDUCATION & EDUCATIONAL RESEARCH)
CiteScore: 10.00 · Self-citation rate: 12.70% · Articles per year: 610
About the journal: The Journal of Education and Information Technologies (EAIT) is a platform for the range of debates and issues in the field of Computing Education as well as the many uses of information and communication technology (ICT) across many educational subjects and sectors. It probes the use of computing to improve education and learning in a variety of settings, platforms and environments. The journal aims to provide perspectives at all levels, from the micro level of specific pedagogical approaches in Computing Education and applications or instances of use in classrooms, to macro concerns of national policies and major projects; from pre-school classes to adults in tertiary institutions; from teachers and administrators to researchers and designers; from institutions to online and lifelong learning. The journal is embedded in the research and practice of professionals within the contemporary global context, and its breadth and scope encourage debate on fundamental issues at all levels and from different research paradigms and learning theories. The journal does not proselytize on behalf of the technologies (whether they be mobile, desktop, interactive, virtual, games-based or learning management systems) but rather provokes debate on all the complex relationships within and between computing and education, whether they are in informal or formal settings. It probes state-of-the-art technologies in Computing Education and it also considers the design and evaluation of digital educational artefacts. The journal aims to maintain and expand its international standing by careful selection on merit of the papers submitted, thus providing a credible ongoing forum for debate and scholarly discourse. Special Issues are occasionally published to cover particular issues in depth. EAIT invites readers to submit papers that draw inferences, probe theory and create new knowledge that informs practice, policy and scholarship.
Readers are also invited to comment and reflect upon the argument and opinions published. EAIT is the official journal of the Technical Committee on Education of the International Federation for Information Processing (IFIP) in partnership with UNESCO.