Evaluating the quality of student-generated content in learnersourcing: A large language model based approach
Kangkang Li, Chengyang Qian, Xianmin Yang
Education and Information Technologies (JCR Q1, EDUCATION & EDUCATIONAL RESEARCH; Impact Factor 4.8)
Published: 2024-07-17
DOI: 10.1007/s10639-024-12851-4
Citations: 0
Abstract
In learnersourcing, automatic evaluation of student-generated content (SGC) is significant because it streamlines the evaluation process, provides timely feedback, and enhances the objectivity of grading, ultimately supporting more effective and efficient learning. However, methods that aggregate students' own evaluations of SGC suffer from inefficiency and cold-start problems, while methods combining feature engineering and deep learning suffer from insufficient accuracy and low scalability. This study introduced an automated SGC quality evaluation method based on a large language model (LLM). The method performs a comprehensive evaluation by having the LLM simulate the cognitive process of human evaluation through a Reason-Act-Evaluate (RAE) prompt, and by integrating an assisted model that analyzes the external features of SGC. The study used SGC from a learnersourcing platform to test the feasibility of the method. The results showed that, with the RAE prompt, the LLM achieves high agreement with experts on SGC quality evaluation, and that results improve further with the help of the assisted model.
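The abstract does not reproduce the RAE prompt itself, but the pipeline it describes (an LLM that reasons about, acts on, and evaluates an SGC item, optionally augmented with external features from an assisted model) might be sketched roughly as follows. The section wording, the 1-5 rubric, and the feature names are illustrative assumptions, not the authors' actual prompt or implementation:

```python
# Illustrative sketch of a Reason-Act-Evaluate (RAE) style prompt for SGC
# quality evaluation. The template wording and rubric are assumptions;
# the paper does not publish its actual prompt.

RAE_TEMPLATE = """You are an expert reviewer of student-generated content (SGC).

[Reason] Analyze the item below: identify the concepts it involves and
what a high-quality piece of SGC on this topic should contain.
[Act] Check the SGC against those criteria, one criterion at a time.
[Evaluate] Give a final quality score from 1 (poor) to 5 (excellent).

External features (from an assisted model): {features}

SGC to evaluate:
{sgc}

Respond with the score on the last line as: SCORE: <1-5>"""


def build_rae_prompt(sgc: str, features: dict) -> str:
    """Fill the RAE template with one SGC item and its external features."""
    feature_text = ", ".join(f"{k}={v}" for k, v in features.items())
    return RAE_TEMPLATE.format(features=feature_text, sgc=sgc)


def parse_score(llm_reply: str) -> int:
    """Extract the 1-5 score from the model's final 'SCORE: n' line."""
    for line in reversed(llm_reply.strip().splitlines()):
        if line.startswith("SCORE:"):
            return int(line.split(":", 1)[1].strip())
    raise ValueError("no SCORE line found in LLM reply")
```

In the paper's design the filled prompt would be sent to the LLM and the returned score compared against expert ratings; only prompt construction and score parsing are shown here, since the model call depends on which LLM is used.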
About the journal
Education and Information Technologies (EAIT) is a platform for the range of debates and issues in the field of Computing Education, as well as the many uses of information and communication technology (ICT) across educational subjects and sectors. It probes the use of computing to improve education and learning in a variety of settings, platforms and environments.
The journal aims to provide perspectives at all levels, from the micro level of specific pedagogical approaches in Computing Education and applications or instances of use in classrooms, to macro concerns of national policies and major projects; from pre-school classes to adults in tertiary institutions; from teachers and administrators to researchers and designers; from institutions to online and lifelong learning. The journal is embedded in the research and practice of professionals within the contemporary global context and its breadth and scope encourage debate on fundamental issues at all levels and from different research paradigms and learning theories. The journal does not proselytize on behalf of the technologies (whether they be mobile, desktop, interactive, virtual, games-based or learning management systems) but rather provokes debate on all the complex relationships within and between computing and education, whether they are in informal or formal settings. It probes state of the art technologies in Computing Education and it also considers the design and evaluation of digital educational artefacts. The journal aims to maintain and expand its international standing by careful selection on merit of the papers submitted, thus providing a credible ongoing forum for debate and scholarly discourse. Special Issues are occasionally published to cover particular issues in depth. EAIT invites readers to submit papers that draw inferences, probe theory and create new knowledge that informs practice, policy and scholarship. Readers are also invited to comment and reflect upon the argument and opinions published. EAIT is the official journal of the Technical Committee on Education of the International Federation for Information Processing (IFIP) in partnership with UNESCO.