{"title":"Current controversies in educational assessment","authors":"Therese N. Hopfenbeck","doi":"10.1080/0969594X.2022.2178602","DOIUrl":null,"url":null,"abstract":"As the global education community is adapting to life in a post-pandemic world, controversies in educational assessment continue to challenge researchers across countries and regions. Some of the controversies in educational assessment are linked to inequalities in the education system, and the fact that students do not have access to the same resources globally, which continues to impact them unfairly with respect to how they are assessed. Perhaps the most dramatic development in this respect is countries which continue to deny girls education, with Afghanistan as a recent example. It demonstrates how important it is to work even harder to reach the UN sustainable development goals, with aspiration for a world of peace, prosperity, and dignity where girls and women can live free from discrimination, and actively take part in education and sit exams for future higher education and careers. One of OECD’s ambitions is to provide evidence-based knowledge to policy makers about their education systems and to enhance equality for all students through their large-scale assessment studies such as PISA. Such ambition is thus dependent upon trust in the actual assessment and demands transparency in how concepts are measured and reported. In the first paper of this issue, Zieger et al. (2022) discusses the so-called ‘conditioning model’, which is part of the OECD’s Programme for International Student Assessment (PISA). The aim of the paper is to discuss this practice and use of the model, and what impact it has on the PISA results. PISA is widely used and cited globally after eight cycles of data collection in almost 100 countries, just during the first quarter of the century (Jerrim, 2023). 
Despite this prominence as the world’s largest and most known comparative international education study, the knowledge around how student background variables are used when deriving students’ achievement scores are less known. More specifically, in their paper, Zieger et al. (this issue) demonstrate that the conditioning model is sensitive to which background variables are included. In fact, changes to how background variables are used lead to changes in the ranking of countries and how they are compared in PISA. This was particularly the case with the variables around socioeconomic background, measures used to measure inequality on education. The authors understandably suggest this issue needs to be further addressed, both within and outside OECD, and results around comparisons of certain measures must be treated with caution. Debates around PISA and other international large-scale studies are not new, and controversial topics around calculations of scores and rankings have been an ongoing debate since the introduction of these studies (Goldstein, 2004). Nevertheless, the call for more openness around the use of different models and the impact it has on the rankings must be addressed, as such studies are dependent upon the public’s trust. ASSESSMENT IN EDUCATION: PRINCIPLES, POLICY & PRACTICE 2022, VOL. 29, NO. 
6, 629–631 https://doi.org/10.1080/0969594X.2022.2178602","PeriodicalId":51515,"journal":{"name":"Assessment in Education-Principles Policy & Practice","volume":null,"pages":null},"PeriodicalIF":2.7000,"publicationDate":"2022-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Assessment in Education-Principles Policy & Practice","FirstCategoryId":"95","ListUrlMain":"https://doi.org/10.1080/0969594X.2022.2178602","RegionNum":3,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"EDUCATION & EDUCATIONAL RESEARCH","Score":null,"Total":0}
Citations: 0
Abstract
As the global education community adapts to life in a post-pandemic world, controversies in educational assessment continue to challenge researchers across countries and regions. Some of these controversies are linked to inequalities in education systems and to the fact that students worldwide do not have access to the same resources, which continues to disadvantage them unfairly in how they are assessed. Perhaps the most dramatic development in this respect is that some countries continue to deny girls an education, with Afghanistan as a recent example. This demonstrates how important it is to work even harder to reach the UN Sustainable Development Goals, with their aspiration for a world of peace, prosperity, and dignity in which girls and women can live free from discrimination, take an active part in education, and sit exams for future higher education and careers. One of the OECD’s ambitions is to provide policy makers with evidence-based knowledge about their education systems and to enhance equality for all students through large-scale assessment studies such as PISA. Such an ambition depends on trust in the assessment itself and demands transparency in how concepts are measured and reported. In the first paper of this issue, Zieger et al. (2022) discuss the so-called ‘conditioning model’, which is part of the OECD’s Programme for International Student Assessment (PISA). The aim of their paper is to examine this modelling practice and its impact on the PISA results. PISA is widely used and cited globally after eight cycles of data collection in almost 100 countries during the first quarter of this century (Jerrim, 2023). Despite this prominence as the world’s largest and best-known comparative international education study, how student background variables are used when deriving students’ achievement scores is less well known. More specifically, in their paper, Zieger et al. (this issue) demonstrate that the conditioning model is sensitive to which background variables are included. In fact, changes to how background variables are used lead to changes in the ranking of countries and in how they are compared in PISA. This was particularly the case for variables around socioeconomic background, the measures used to capture inequality in education. The authors understandably suggest that this issue needs to be addressed further, both within and outside the OECD, and that comparisons based on certain measures must be treated with caution. Debates around PISA and other international large-scale studies are not new, and the calculation of scores and rankings has been a controversial topic since the introduction of these studies (Goldstein, 2004). Nevertheless, the call for more openness around the use of different models and their impact on the rankings must be addressed, as such studies depend on the public’s trust.
Journal description:
Recent decades have witnessed significant developments in the field of educational assessment. New approaches to the assessment of student achievement have been complemented by the increasing prominence of educational assessment as a policy issue. In particular, there has been a growth of interest in modes of assessment that promote, as well as measure, standards and quality. These have profound implications for individual learners, institutions and the educational system itself. Assessment in Education provides a focus for scholarly output in the field of assessment. The journal is explicitly international in focus and encourages contributions from a wide range of assessment systems and cultures. The journal's intention is to explore both commonalities and differences in policy and practice.