Covariate Balance for Observational Effectiveness Studies: A Comparison of Matching and Weighting

Joseph M. Kush, Elise T. Pas, R. Musci, Catherine P. Bradshaw

Journal of Research on Educational Effectiveness, published online 2022-09-07. DOI: 10.1080/19345747.2022.2110545
Abstract: Propensity score matching and weighting methods are often used in observational effectiveness studies to reduce imbalance between treated and untreated groups on a set of potential confounders. However, much of the prior methodological literature on matching and weighting has yet to examine performance in scenarios with a majority of treated units, as is often encountered with programs and interventions that have been widely disseminated or “scaled-up.” Using a series of Monte Carlo simulations, we compare the performance of k:1 matching with replacement and weighting methods with respect to covariate balance, bias, and mean squared error. Results indicate that the accuracy of all methods declined as treatment prevalence increased. While weighting produced the largest reduction in covariate imbalance, 1:1 matching with replacement provided the least biased treatment effect estimates. An applied example using empirical school-level data further illustrates the application of these methods to a real-world scale-up effort and the interpretation of their results. We conclude by considering the implications of propensity score methods for observational effectiveness studies, with a particular focus on educational research.
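To make the methods named in the abstract concrete, the sketch below illustrates, under simplified and hypothetical assumptions that do not reproduce the authors' Monte Carlo design, how propensity scores can be estimated and then used both for 1:1 nearest-neighbor matching with replacement and for ATT-style inverse-probability weighting, with a standardized mean difference as a rough covariate balance check. All variable names, the simulated data, and the single-confounder setup are illustrative only.

```python
# Minimal illustrative sketch (not the authors' simulation code): estimate
# propensity scores with logistic regression, then compare 1:1 nearest-neighbor
# matching with replacement against inverse-probability-of-treatment weighting
# (ATT weights) on one simulated confounder. Data and names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=(n, 1))                           # one confounder
p_treat = 1 / (1 + np.exp(-(0.5 + 1.0 * x[:, 0])))    # majority-treated scenario
z = rng.binomial(1, p_treat)                          # treatment indicator
y = 2.0 * z + 1.5 * x[:, 0] + rng.normal(size=n)      # true treatment effect = 2.0

# Propensity scores from a logistic regression of treatment on the confounder
ps = LogisticRegression().fit(x, z).predict_proba(x)[:, 1]

# 1:1 nearest-neighbor matching with replacement on the propensity score
treated, control = np.where(z == 1)[0], np.where(z == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(ps[control].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treated].reshape(-1, 1))
matched_controls = control[idx.ravel()]
att_match = y[treated].mean() - y[matched_controls].mean()

# Inverse-probability weighting for the ATT: treated weight 1, control weight ps/(1-ps)
w = np.where(z == 1, 1.0, ps / (1 - ps))
att_ipw = y[treated].mean() - np.average(y[control], weights=w[control])

# Standardized mean difference of the confounder (pooled unweighted SD in the denominator)
def smd(x1, x0, w1=None, w0=None):
    m1, m0 = np.average(x1, weights=w1), np.average(x0, weights=w0)
    return (m1 - m0) / np.sqrt((x1.var() + x0.var()) / 2)

print(f"ATT (matching): {att_match:.2f}, ATT (IPW): {att_ipw:.2f}")
print(f"SMD before: {smd(x[treated, 0], x[control, 0]):.2f}, "
      f"after IPW: {smd(x[treated, 0], x[control, 0], None, w[control]):.2f}")
```

In practice, analyses like those in the article would typically be run with dedicated propensity score software and many covariates; the point of the sketch is only to show how matching with replacement and weighting use the same estimated propensity score in different ways, and how balance can be summarized with standardized mean differences.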
Journal description:
As the flagship publication of the Society for Research on Educational Effectiveness, the Journal of Research on Educational Effectiveness (JREE) publishes original articles from the multidisciplinary community of researchers who are committed to applying principles of scientific inquiry to the study of educational problems. Articles published in JREE should advance our knowledge of factors important for educational success and/or improve our ability to conduct further disciplined studies of pressing educational problems. JREE welcomes manuscripts that fit into one of the following categories: (1) intervention, evaluation, and policy studies; (2) theory, contexts, and mechanisms; and (3) methodological studies. The first category includes studies that focus on process and implementation and seek to establish causal claims in educational research. The second category includes meta-analyses and syntheses, descriptive studies that illuminate educational conditions and contexts, and studies that rigorously investigate educational processes and mechanisms. The third category includes studies that advance our understanding of theoretical and technical features of measurement and research design and describe advances in data analysis and data modeling. To establish a stronger connection between scientific evidence and educational practice, studies submitted to JREE should focus on pressing problems found in classrooms and schools. Studies that help advance our understanding and demonstrate effectiveness related to challenges in reading, mathematics education, and science education are especially welcome, as are studies related to cognitive functions, social processes, organizational factors, and cultural features that mediate and/or moderate critical educational outcomes. On occasion, invited responses to JREE articles and rejoinders to those responses will be included in an issue.