How Accurate Are Our Students? A Meta-analytic Systematic Review on Self-assessment Scoring Accuracy

Impact Factor 10.1 · CAS Tier 1 (Psychology) · JCR Q1, Psychology, Educational
Samuel P. León, Ernesto Panadero, Inmaculada García-Martínez
{"title":"How Accurate Are Our Students? A Meta-analytic Systematic Review on Self-assessment Scoring Accuracy","authors":"Samuel P. León, Ernesto Panadero, Inmaculada García-Martínez","doi":"10.1007/s10648-023-09819-0","DOIUrl":null,"url":null,"abstract":"<p>Developing the ability to self-assess is a crucial skill for students, as it impacts their academic performance and learning strategies, amongst other areas. Most existing research in this field has concentrated on the exploration of the students’ capacity to accurately assign a score to their work that closely mirrors an expert’s evaluation, typically a teacher’s. Though this process is commonly referred to as self-assessment, a more precise term would be self-assessment scoring accuracy. Our aim is to review what is the average accuracy and what moderators might influence this accuracy. Following PRISMA recommendations, we reviewed 160 articles, including data from 29,352 participants. We analysed 9 factors as possible moderators: (1) assessment criteria; (2) use of rubric; (3) self-assessment experience; (4) feedback; (5) content knowledge; (6) incentive; (7) formative assessment; (8) field of knowledge; and (9) educational level. The results showed an overall effect of students’ overestimation (<i>g</i> = 0.206) with an average relationship of <i>z</i> = 0.472 between students’ estimation and the expert’s measure. The overestimation diminishes when students receive feedback, possess greater self-assessment experience and content knowledge, when the assessment does not have formative purposes, and in younger students (primary and secondary education). Importantly, the studies analysed exhibited significant heterogeneity and lacked crucial methodological information. </p>","PeriodicalId":48344,"journal":{"name":"Educational Psychology Review","volume":"73 20","pages":""},"PeriodicalIF":10.1000,"publicationDate":"2023-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Educational Psychology Review","FirstCategoryId":"102","ListUrlMain":"https://doi.org/10.1007/s10648-023-09819-0","RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"PSYCHOLOGY, EDUCATIONAL","Score":null,"Total":0}
Citations: 0

Abstract

Developing the ability to self-assess is a crucial skill for students, as it impacts their academic performance and learning strategies, amongst other areas. Most existing research in this field has concentrated on exploring students’ capacity to assign their work a score that closely mirrors an expert’s evaluation, typically a teacher’s. Though this process is commonly referred to as self-assessment, a more precise term would be self-assessment scoring accuracy. Our aim is to review the average accuracy and the moderators that might influence it. Following PRISMA recommendations, we reviewed 160 articles, including data from 29,352 participants. We analysed nine factors as possible moderators: (1) assessment criteria; (2) use of a rubric; (3) self-assessment experience; (4) feedback; (5) content knowledge; (6) incentive; (7) formative assessment; (8) field of knowledge; and (9) educational level. The results showed an overall effect of student overestimation (g = 0.206), with an average relationship of z = 0.472 between students’ estimations and the expert’s measure. The overestimation diminishes when students receive feedback, possess greater self-assessment experience and content knowledge, when the assessment does not have formative purposes, and in younger students (primary and secondary education). Importantly, the studies analysed exhibited significant heterogeneity and lacked crucial methodological information.
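
For readers who want to interpret the pooled estimates (g = 0.206; z = 0.472), the sketch below shows, under standard meta-analytic conventions rather than the authors' own analysis code, how Hedges' g is computed from group means and how a Fisher-z averaged correlation is back-transformed to the correlation scale. All function names and inputs other than the two reported values are illustrative assumptions.

```python
# A minimal illustrative sketch (not the review's actual analysis code) of how the two
# pooled statistics reported above are typically computed in a meta-analysis:
# Hedges' g for the self-score vs. expert-score difference, and Fisher's z for
# averaging correlations. All inputs except g = 0.206 and z = 0.472 are hypothetical.
import math


def hedges_g(mean_self: float, mean_expert: float, sd_pooled: float,
             n_self: int, n_expert: int) -> float:
    """Standardized mean difference with the small-sample correction J."""
    d = (mean_self - mean_expert) / sd_pooled      # Cohen's d; positive = overestimation
    j = 1 - 3 / (4 * (n_self + n_expert) - 9)      # Hedges' small-sample correction
    return j * d


def fisher_z(r: float) -> float:
    """Fisher r-to-z transformation, applied before averaging correlations."""
    return math.atanh(r)


def fisher_z_to_r(z: float) -> float:
    """Back-transform an averaged Fisher z to the correlation scale."""
    return math.tanh(z)


# Interpreting the review's pooled estimates:
# g = 0.206  -> students score themselves about 0.21 SD above the expert on average.
# z = 0.472  -> corresponds to an average correlation of roughly r = tanh(0.472) ≈ 0.44.
print(round(fisher_z_to_r(0.472), 2))              # 0.44
```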

Source journal
Educational Psychology Review (Psychology, Educational)
CiteScore: 15.70
Self-citation rate: 3.00%
Articles per year: 62
Journal description: Educational Psychology Review aims to disseminate knowledge and promote dialogue within the field of educational psychology. It serves as a platform for the publication of various types of articles, including peer-reviewed integrative reviews, special thematic issues, reflections on previous research or new research directions, interviews, and research-based advice for practitioners. The journal caters to a diverse readership, ranging from generalists in educational psychology to experts in specific areas of the discipline. The content offers comprehensive coverage of topics and provides in-depth information to meet the needs of both specialized researchers and practitioners.