Sabine Meinck, Jörg-Henrik Heine, Julia Mang, Gabriel Nagy
Title: Bias risks in ILSA related to non-participation: evidence from a longitudinal large-scale survey in Germany (PISA Plus)
Journal: Educational Assessment Evaluation and Accountability (JCR Q1, Education & Educational Research; Impact Factor 2.8)
DOI: 10.1007/s11092-023-09422-5
Published: 2023-12-06 (Journal Article)
Citations: 0
Abstract
This study uses evidence from a longitudinal survey (PISA Plus, Germany) to examine the potential for bias in international large-scale assessments (ILSAs). In PISA Plus, participation was mandatory at the first measurement point but voluntary at the second. The study provides evidence of relevant selection bias regarding student competencies and background variables when participation is voluntary. Sample dropout at the second measurement point was related to characteristics such as family background, achievement in mathematics, reading, and science, and other demographic variables at both the student and school levels; lower-performing students and those with less favorable background characteristics dropped out more frequently, implying higher dropout probabilities for these groups. We further contrast the possibilities for addressing non-response through weight adjustments in longitudinal surveys with those in cross-sectional surveys. Considering our results, we evaluate and confirm the validity and appropriateness of strict participation rate requirements in ILSAs. Likely magnitudes of bias in cross-sectional studies are illustrated under varying scenarios. Accordingly, if combined participation rates drop below 70%, a difference of at least one-fifth of a standard deviation in an achievement score between non-respondents and participants leads to relevant bias. When participation drops below 50%, even a very small difference (one-tenth of a standard deviation) will cause non-negligible bias. Finally, we conclude that the stringent participation rate requirements established in most ILSAs are fully valid, reasonable, and important, since they ensure a relatively low risk of biased results.
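The bias scenarios in the abstract follow the standard deterministic approximation for non-response bias in a mean estimate: bias ≈ (1 − participation rate) × (mean difference between respondents and non-respondents). A minimal sketch of this textbook approximation (not necessarily the authors' exact computation), with the mean difference expressed in standard-deviation units:

```python
def nonresponse_bias(participation_rate: float, sd_difference: float) -> float:
    """Approximate bias in an estimated mean under unit non-response.

    Standard deterministic approximation:
        bias = (1 - participation_rate) * (mean_respondents - mean_nonrespondents)
    where sd_difference is the respondent/non-respondent gap in SD units.
    """
    return (1.0 - participation_rate) * sd_difference


# Scenarios matching the abstract (achievement gaps in SD units):
# 70% participation with a 0.2 SD gap, 50% participation with a 0.1 SD gap.
for rate, gap in [(0.70, 0.20), (0.50, 0.10)]:
    bias = nonresponse_bias(rate, gap)
    print(f"participation {rate:.0%}, gap {gap} SD -> bias of about {bias:.3f} SD")
```

Under this approximation, both thresholds cited in the abstract correspond to a bias of roughly 0.05-0.06 SD in the estimated achievement mean, which the authors treat as the boundary of non-negligible distortion.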
Journal overview:
The main objective of this international journal is to advance knowledge and the dissemination of research on and about evaluation, assessment, and accountability:
- of all kinds (e.g., person, programme, organisation),
- on various levels (state, regional, local),
- in all fields of education (primary, secondary, higher/tertiary education, as well as the non-school sector) and across all life phases (e.g., adult education/andragogy, human resource management, professional development).

The journal provides readers with an understanding of the rich contextual nature of evaluation, assessment, and accountability in education. It is theory-oriented and methodology-based and seeks to connect research, policy making, and practice.

The journal explores and discusses:
- theories of evaluation, assessment, and accountability,
- the function, role, aims, and purpose of evaluation, assessment, and accountability,
- the impact of evaluation, assessment, and accountability,
- methodology, design, and methods of evaluation, assessment, and accountability,
- principles, standards, and quality of evaluation, assessment, and accountability,
- issues of planning, coordinating, conducting, and reporting evaluation, assessment, and accountability.

The journal also covers the quality of the different instruments, procedures, and approaches used for evaluation, assessment, and accountability. Research findings from evaluation, assessment, and accountability are included only if their design or approach is meta-reflected in the article. The journal publishes outstanding empirical works, peer-reviewed by eminent scholars around the world.