Teresa M. Ober, Maxwell R. Hong, Matthew F. Carter, Alex S. Brodersen, Daniella A. Rebouças-Ju, Cheng Liu, Ying Cheng
{"title":"高中生预测自己的AP考试成绩准确吗?:检查学生预测的不准确性和过度自信","authors":"Teresa M. Ober, Maxwell R. Hong, Matthew F. Carter, Alex S. Brodersen, Daniella A. Rebouças-Ju, Cheng Liu, Ying Cheng","doi":"10.1080/0969594X.2022.2037508","DOIUrl":null,"url":null,"abstract":"ABSTRACT We examined whether students were accurate in predicting their test performance in both low-stakes and high-stakes testing contexts. The sample comprised U.S. high school students enrolled in an advanced placement (AP) statistics course during the 2017–2018 academic year (N = 209; Mage = 16.6 years). We found that even two months before taking the AP exam, a high stakes summative assessment, students were moderately accurate in predicting their actual scores (κweighted = .62). When the same variables were entered into models predicting inaccuracy and overconfidence bias, results did not provide evidence that age, gender, parental education, number of mathematics classes previously taken, or course engagement accounted for variation in accuracy. Overconfidence bias differed between students enrolled at different schools. Results indicated that students’ predictions of performance were positively associated with performance in both low- and high-stakes testing contexts. The findings shed light on ways to leverage students’ self-assessment for learning.","PeriodicalId":51515,"journal":{"name":"Assessment in Education-Principles Policy & Practice","volume":"124 1","pages":"27 - 50"},"PeriodicalIF":2.7000,"publicationDate":"2022-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Are high school students accurate in predicting their AP exam scores?: Examining inaccuracy and overconfidence of students’ predictions\",\"authors\":\"Teresa M. Ober, Maxwell R. Hong, Matthew F. Carter, Alex S. Brodersen, Daniella A. 
Rebouças-Ju, Cheng Liu, Ying Cheng\",\"doi\":\"10.1080/0969594X.2022.2037508\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"ABSTRACT We examined whether students were accurate in predicting their test performance in both low-stakes and high-stakes testing contexts. The sample comprised U.S. high school students enrolled in an advanced placement (AP) statistics course during the 2017–2018 academic year (N = 209; Mage = 16.6 years). We found that even two months before taking the AP exam, a high stakes summative assessment, students were moderately accurate in predicting their actual scores (κweighted = .62). When the same variables were entered into models predicting inaccuracy and overconfidence bias, results did not provide evidence that age, gender, parental education, number of mathematics classes previously taken, or course engagement accounted for variation in accuracy. Overconfidence bias differed between students enrolled at different schools. Results indicated that students’ predictions of performance were positively associated with performance in both low- and high-stakes testing contexts. 
The findings shed light on ways to leverage students’ self-assessment for learning.\",\"PeriodicalId\":51515,\"journal\":{\"name\":\"Assessment in Education-Principles Policy & Practice\",\"volume\":\"124 1\",\"pages\":\"27 - 50\"},\"PeriodicalIF\":2.7000,\"publicationDate\":\"2022-01-02\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Assessment in Education-Principles Policy & Practice\",\"FirstCategoryId\":\"95\",\"ListUrlMain\":\"https://doi.org/10.1080/0969594X.2022.2037508\",\"RegionNum\":3,\"RegionCategory\":\"教育学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"EDUCATION & EDUCATIONAL RESEARCH\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Assessment in Education-Principles Policy & Practice","FirstCategoryId":"95","ListUrlMain":"https://doi.org/10.1080/0969594X.2022.2037508","RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"EDUCATION & EDUCATIONAL RESEARCH","Score":null,"Total":0}
Are high school students accurate in predicting their AP exam scores?: Examining inaccuracy and overconfidence of students’ predictions
ABSTRACT We examined whether students were accurate in predicting their test performance in both low-stakes and high-stakes testing contexts. The sample comprised U.S. high school students enrolled in an Advanced Placement (AP) statistics course during the 2017–2018 academic year (N = 209; mean age = 16.6 years). We found that even two months before taking the AP exam, a high-stakes summative assessment, students were moderately accurate in predicting their actual scores (weighted κ = .62). When the same variables were entered into models predicting inaccuracy and overconfidence bias, results did not provide evidence that age, gender, parental education, number of mathematics classes previously taken, or course engagement accounted for variation in accuracy. Overconfidence bias differed between students enrolled at different schools. Results indicated that students’ predictions of performance were positively associated with performance in both low- and high-stakes testing contexts. The findings shed light on ways to leverage students’ self-assessment for learning.
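The abstract summarises prediction accuracy with a weighted Cohen's kappa. As a minimal sketch of how that statistic is computed, the snippet below implements linearly weighted kappa in plain Python; the linear weighting scheme and the toy predicted/actual AP scores are assumptions for illustration only, not the authors' data or code.

```python
# Illustrative sketch (not the authors' code): linearly weighted Cohen's kappa,
# the agreement statistic reported in the abstract (weighted kappa = .62 there).

def weighted_kappa(rater_a, rater_b, categories):
    """Linearly weighted Cohen's kappa for two equal-length rating lists."""
    k = len(categories)
    n = len(rater_a)
    index = {c: i for i, c in enumerate(categories)}
    # Disagreement penalty grows linearly with distance between categories.
    w = [[abs(i - j) / (k - 1) for j in range(k)] for i in range(k)]
    # Observed weighted disagreement across paired ratings.
    observed = sum(w[index[a]][index[b]] for a, b in zip(rater_a, rater_b)) / n
    # Expected weighted disagreement under independent marginal distributions.
    pa = [sum(1 for x in rater_a if x == c) / n for c in categories]
    pb = [sum(1 for x in rater_b if x == c) / n for c in categories]
    expected = sum(pa[i] * pb[j] * w[i][j] for i in range(k) for j in range(k))
    return 1 - observed / expected

# Hypothetical predicted vs. actual AP scores on the 1-5 scale.
predicted = [3, 4, 5, 2, 4, 3, 5, 1]
actual    = [3, 4, 4, 2, 5, 3, 5, 2]

kappa = weighted_kappa(predicted, actual, categories=[1, 2, 3, 4, 5])
print(round(kappa, 3))
```

A kappa near 1 indicates near-perfect agreement between predicted and actual scores; quadratic weights (squared distance penalties) are a common alternative when large misses should be penalised more heavily.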
Journal description:
Recent decades have witnessed significant developments in the field of educational assessment. New approaches to the assessment of student achievement have been complemented by the increasing prominence of educational assessment as a policy issue. In particular, there has been a growth of interest in modes of assessment that promote, as well as measure, standards and quality. These have profound implications for individual learners, institutions and the educational system itself. Assessment in Education provides a focus for scholarly output in the field of assessment. The journal is explicitly international in focus and encourages contributions from a wide range of assessment systems and cultures. The journal's intention is to explore both commonalities and differences in policy and practice.