Estimating the Impact of Emergency Assistance on Educational Progress for Low-Income Adults: Experimental and Nonexperimental Evidence

Author: Daniel Litwok
Journal: Evaluation Review, Vol. 47, No. 2, pp. 231-263
Publication date: 2023-04-01 (Journal Article)
DOI: 10.1177/0193841X221118454
Citations: 0
Abstract
Methods for estimating causal impact aim to either remove or reduce bias. This study estimates the degree of bias reduction obtained from regression adjustment and propensity score methods when only a weak set of predictors is available. The study uses an experimental test of providing emergency financial assistance to participants in a job training program to estimate an experimental benchmark and compares it to nonexperimental estimates of the impact of receiving assistance. When estimating the impact of receiving assistance, those who received it constitute the treatment group. The study explores two different comparison groups: those who could have received emergency assistance (because they were assigned to the experimental treatment group) but did not; and those who could not receive emergency assistance because they were randomly assigned to the experimental control group. It uses these groups to estimate impacts by applying three estimation strategies: unadjusted mean comparison, regression adjustment, and inverse propensity weighting. It then compares these estimates to the experimental benchmark using statistical tests recommended by the within-study comparison literature. The nonexperimental approaches to addressing selection bias suggest large positive impacts. These are statistically different from the experimental benchmark, which shows that receipt of emergency assistance does not improve educational progress. Further, over 90% of the bias from a simple comparison of means remains. Unless a stronger set of predictors is available, future evaluations of such interventions should be wary of relying on these methods for either unbiased estimation of impacts or bias reduction.
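The mechanism behind the abstract's headline result can be illustrated with a small simulation. This is not the paper's analysis or data; it is a hypothetical sketch showing why the three named strategies (unadjusted mean comparison, regression adjustment, inverse propensity weighting) all remain badly biased when selection into treatment is driven by an unobserved factor and only a weak observed predictor is available. The variable names (`x`, `u`, `t`, `y`) and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# One weak observed predictor x; unobserved motivation u drives both
# take-up of assistance and the outcome (the source of selection bias).
x = rng.normal(size=n)
u = rng.normal(size=n)
t = (0.5 * x + u + rng.normal(size=n) > 0).astype(float)  # treatment take-up
y = 1.0 * u + 0.3 * x + rng.normal(size=n)                # true effect of t is ZERO

# 1) Unadjusted mean comparison
naive = y[t == 1].mean() - y[t == 0].mean()

# 2) Regression adjustment: OLS of y on treatment and the observed predictor
X = np.column_stack([np.ones(n), t, x])
reg_adj = np.linalg.lstsq(X, y, rcond=None)[0][1]

# 3) Inverse propensity weighting, with a logistic propensity model in x
#    (fit by simple gradient ascent to keep the example dependency-free)
def fit_logit(features, labels, iters=200, lr=0.1):
    w = np.zeros(features.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-features @ w))
        w += lr * features.T @ (labels - p) / len(labels)
    return w

F = np.column_stack([np.ones(n), x])
p = 1.0 / (1.0 + np.exp(-(F @ fit_logit(F, t))))
ipw = np.average(y, weights=t / p) - np.average(y, weights=(1 - t) / (1 - p))

print(naive, reg_adj, ipw)  # all far from the true effect of 0
```

Because regression adjustment and the propensity model can only condition on the weak predictor `x`, the confounding that runs through unobserved `u` survives all three estimators, mirroring the paper's finding that over 90% of the naive bias remains.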
Journal Introduction
Evaluation Review is the forum for researchers, planners, and policy makers engaged in the development, implementation, and utilization of studies aimed at the betterment of the human condition. The Editors invite submission of papers reporting the findings of evaluation studies in such fields as child development, health, education, income security, manpower, mental health, criminal justice, and the physical and social environments. In addition, Evaluation Review will contain articles on methodological developments, discussions of the state of the art, and commentaries on issues related to the application of research results. Special features will include periodic review essays, "research briefs", and "craft reports".