{"title":"Can Artificial Intelligence help provide more sustainable feed-back?","authors":"Eloi Puertas Prats, María Elena Cano García","doi":"10.1344/der.2024.45.50-58","DOIUrl":null,"url":null,"abstract":"\nPeer assessment is a strategy wherein students evaluate the level, value, or quality of their peers' work within the same educational setting. Research has demonstrated that peer evaluation processes positively impact skill development and academic performance. By applying evaluation criteria to their peers' work and offering comments, corrections, and suggestions for improvement, students not only enhance their own work but also cultivate critical thinking skills. To effectively nurture students' role as evaluators, deliberate and structured opportunities for practice, along with training and guidance, are essential.\n\n\nArtificial Intelligence (AI) can offer a means to assess peer evaluations automatically, ensuring their quality and assisting students in executing assessments with precision. This approach allows educators to focus on evaluating student productions without necessitating specialized training in feedback evaluation.\n\nThis paper presents the process developed to automate the assessment of feedback quality. Through the utilization of feedback fragments evaluated by researchers based on pre-established criteria, an Artificial Intelligence (AI) Large Language Model (LM) was trained to achieve automated evaluation. The findings show the similarity between human evaluation and automated evaluation, which allows expectations to be generated regarding the possibilities of AI for this purpose. The challenges and prospects of this process are discussed, along with recommendations for\noptimizing results.\n\nArtificial intelligence can offer a means to assess peer evaluations automatically, ensuring their quality and assisting students in executing assessments with precision. 
This approach allows educators to focus on evaluating student productions without necessitating specialized training in feedback evaluation.\nThis paper presents the process developed to automate the assessment of feedback quality. Through the utilization of feedback fragments evaluated by researchers based on pre-established criteria, an artificial intelligence Large Language Model was trained to achieve automated evaluation. The challenges and prospects of this process are discussed, along with recommendations for optimizing results.","PeriodicalId":44576,"journal":{"name":"Digital Education Review","volume":null,"pages":null},"PeriodicalIF":1.2000,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Digital Education Review","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1344/der.2024.45.50-58","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"EDUCATION & EDUCATIONAL RESEARCH","Score":null,"Total":0}
Citations: 0
Abstract
Peer assessment is a strategy wherein students evaluate the level, value, or quality of their peers' work within the same educational setting. Research has demonstrated that peer evaluation processes positively impact skill development and academic performance. By applying evaluation criteria to their peers' work and offering comments, corrections, and suggestions for improvement, students not only enhance their own work but also cultivate critical thinking skills. To effectively nurture students' role as evaluators, deliberate and structured opportunities for practice, along with training and guidance, are essential.
Artificial Intelligence (AI) can offer a means to assess peer evaluations automatically, ensuring their quality and assisting students in executing assessments with precision. This approach allows educators to focus on evaluating student productions without necessitating specialized training in feedback evaluation.
This paper presents the process developed to automate the assessment of feedback quality. Using feedback fragments rated by researchers against pre-established criteria, a Large Language Model (LLM) was trained to perform automated evaluation. The findings show the similarity between human and automated evaluation, which allows expectations to be formed regarding the potential of AI for this purpose. The challenges and prospects of this process are discussed, along with recommendations for optimizing results.
Journal description:
Digital Education Review (DER) is a scientific, open-access, peer-reviewed journal designed as a space for dialogue and reflection about the impact of ICT on education and on new, emergent forms of teaching and learning in digital environments. It is published twice yearly (June and December) and includes articles in English or Spanish. ICT plays an important role in education, raising discussions and important new challenges; analyzing the impact of ICT, new forms of literacy, and virtual teaching and learning are the main goals of Digital Education Review. The journal is open to all researchers who wish to propose articles on this subject. Admitted articles include empirical investigations as well as reviews and theoretical reflections. The journal publishes different kinds of articles: Peer-Reviewed Articles, which have passed a blind review carried out by a group of experts; Reviews, short articles about books, software, websites, and PhD theses; and Guest and Invited Articles, approved by the Editorial Board of the journal. DER publishes issues related to its focus and scope as well as monographic issues centered on a specific subject; both are subjected to a peer-review process. Finally, this journal is published by the Digital Education Observatory (OED) and the Virtual Teaching and Learning Research Group (GREAV) at the Universitat de Barcelona.