{"title":"Can Artificial Intelligence help provide more sustainable feed-back?","authors":"Eloi Puertas Prats, María Elena Cano García","doi":"10.1344/der.2024.45.50-58","DOIUrl":null,"url":null,"abstract":"\nPeer assessment is a strategy wherein students evaluate the level, value, or quality of their peers' work within the same educational setting. Research has demonstrated that peer evaluation processes positively impact skill development and academic performance. By applying evaluation criteria to their peers' work and offering comments, corrections, and suggestions for improvement, students not only enhance their own work but also cultivate critical thinking skills. To effectively nurture students' role as evaluators, deliberate and structured opportunities for practice, along with training and guidance, are essential.\n\n\nArtificial Intelligence (AI) can offer a means to assess peer evaluations automatically, ensuring their quality and assisting students in executing assessments with precision. This approach allows educators to focus on evaluating student productions without necessitating specialized training in feedback evaluation.\n\nThis paper presents the process developed to automate the assessment of feedback quality. Through the utilization of feedback fragments evaluated by researchers based on pre-established criteria, an Artificial Intelligence (AI) Large Language Model (LM) was trained to achieve automated evaluation. The findings show the similarity between human evaluation and automated evaluation, which allows expectations to be generated regarding the possibilities of AI for this purpose. The challenges and prospects of this process are discussed, along with recommendations for\noptimizing results.\n\nArtificial intelligence can offer a means to assess peer evaluations automatically, ensuring their quality and assisting students in executing assessments with precision. This approach allows educators to focus on evaluating student productions without necessitating specialized training in feedback evaluation.\nThis paper presents the process developed to automate the assessment of feedback quality. Through the utilization of feedback fragments evaluated by researchers based on pre-established criteria, an artificial intelligence Large Language Model was trained to achieve automated evaluation. The challenges and prospects of this process are discussed, along with recommendations for optimizing results.","PeriodicalId":1,"journal":{"name":"Accounts of Chemical Research","volume":null,"pages":null},"PeriodicalIF":16.4000,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Accounts of Chemical Research","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1344/der.2024.45.50-58","RegionNum":1,"RegionCategory":"化学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"CHEMISTRY, MULTIDISCIPLINARY","Score":null,"Total":0}
Abstract
Peer assessment is a strategy wherein students evaluate the level, value, or quality of their peers' work within the same educational setting. Research has demonstrated that peer evaluation processes positively impact skill development and academic performance. By applying evaluation criteria to their peers' work and offering comments, corrections, and suggestions for improvement, students not only enhance their own work but also cultivate critical thinking skills. To effectively nurture students' role as evaluators, deliberate and structured opportunities for practice, along with training and guidance, are essential.
Artificial Intelligence (AI) can offer a means of assessing peer evaluations automatically, ensuring their quality and helping students carry out assessments accurately. This approach allows educators to focus on evaluating student work without requiring specialized training in feedback evaluation.
This paper presents the process developed to automate the assessment of feedback quality. Using feedback fragments evaluated by researchers against pre-established criteria, an Artificial Intelligence (AI) Large Language Model (LLM) was trained to perform automated evaluation. The findings show a similarity between human and automated evaluation, which supports realistic expectations about the possibilities of AI for this purpose. The challenges and prospects of this process are discussed, along with recommendations for optimizing results.
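The abstract does not detail the training pipeline, so the following is a minimal, hypothetical sketch of the general workflow it describes: researcher-labelled feedback fragments, a trained model, and a comparison of the automated scores with human judgements. A simple TF-IDF plus logistic-regression classifier stands in for the fine-tuned Large Language Model, and the fragments, labels, and criteria shown are invented for illustration only.

```python
# Hedged sketch, not the authors' pipeline: a lightweight supervised classifier
# stands in for the fine-tuned LLM. The workflow has the same shape as the one
# described in the abstract: labelled feedback fragments -> trained model ->
# automated scoring -> agreement check against human judgements.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.metrics import cohen_kappa_score

# Hypothetical training data: peer-feedback fragments with researcher-assigned
# quality labels (0 = low quality, 1 = high quality) based on pre-established criteria.
train_fragments = [
    "Good job.",                                                      # vague praise
    "Wrong answer.",                                                  # unhelpful
    "Nice work!",                                                     # vague praise
    "Your argument is clear, but cite the source in section 2.",      # specific, actionable
    "Add an example after the definition to help the reader.",        # constructive suggestion
    "The conclusion restates the thesis well; consider shortening it.",
]
train_labels = [0, 0, 0, 1, 1, 1]

# Train the stand-in model on the researcher-labelled fragments.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(train_fragments, train_labels)

# Automated evaluation of new peer feedback, then agreement with human raters.
new_fragments = ["Great!", "Explain why step 3 follows from step 2."]
human_ratings = [0, 1]                    # labels researchers would assign
automated = model.predict(new_fragments)  # labels the model assigns
print("Cohen's kappa (human vs. automated):",
      cohen_kappa_score(human_ratings, automated))
```

Cohen's kappa is one common way to quantify the "similarity between human evaluation and automated evaluation" mentioned above; the paper itself may rely on a different agreement measure.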