{"title":"使用研究内比较逻辑的回归点位移设计的实证验证:新出现的可能性和注意事项。","authors":"Joshua Hendrickse, William H Yeaton","doi":"10.1177/0193841X211064420","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>The regression point displacement (RPD) design is a quasi-experiment (QE) that aims to control many threats to internal validity. Though it has existed for several decades, RPD has only recently begun to answer applied research questions in lieu of stronger QEs.</p><p><strong>Objectives: </strong>Our primary objective was to implement within-study comparison (WSC) logic to create RPD replicates and to determine conditions under which RPD might provide estimates comparable to those found in validating experiments.</p><p><strong>Research design: </strong>We utilize three randomized controlled trials (two cluster-level, one individual-level), artificially decomposing or creating cluster structures, to create multiple RPDs. We compare results in each RPD treatment group to a fixed set of control groups to gauge the congruence of these repeated RPD realizations with results found in these three RCTs.</p><p><strong>Results: </strong>RPD's performance was uneven. Using multiple criteria, we found that RPDs successfully predicted the direction of the RCT's intervention effect but inconsistently fell within the .10 SD threshold. A scant 13% of RPD results were statistically significant at either the .05 or .01 alpha-level. RPD results were within the 95% confidence interval of RCTs around half the time, and false negative rates were substantially higher than false positive rates.</p><p><strong>Conclusions: </strong>RPD consistently underestimates treatment effects in validating RCTs. We analyze reasons for this insensitivity and offer practical suggestions to improve the chances RPD will correctly identify favorable results. 
We note that the synthetic, \"decomposition of cluster RCTs,\" WSC design represents a prototype for evaluating other QEs.</p>","PeriodicalId":47533,"journal":{"name":"Evaluation Review","volume":"45 6","pages":"279-308"},"PeriodicalIF":3.0000,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"An Empirical Validation of the Regression Point Displacement Design Using Within-Study Comparison Logic: Emerging Possibilities and Cautions.\",\"authors\":\"Joshua Hendrickse, William H Yeaton\",\"doi\":\"10.1177/0193841X211064420\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Background: </strong>The regression point displacement (RPD) design is a quasi-experiment (QE) that aims to control many threats to internal validity. Though it has existed for several decades, RPD has only recently begun to answer applied research questions in lieu of stronger QEs.</p><p><strong>Objectives: </strong>Our primary objective was to implement within-study comparison (WSC) logic to create RPD replicates and to determine conditions under which RPD might provide estimates comparable to those found in validating experiments.</p><p><strong>Research design: </strong>We utilize three randomized controlled trials (two cluster-level, one individual-level), artificially decomposing or creating cluster structures, to create multiple RPDs. We compare results in each RPD treatment group to a fixed set of control groups to gauge the congruence of these repeated RPD realizations with results found in these three RCTs.</p><p><strong>Results: </strong>RPD's performance was uneven. Using multiple criteria, we found that RPDs successfully predicted the direction of the RCT's intervention effect but inconsistently fell within the .10 SD threshold. A scant 13% of RPD results were statistically significant at either the .05 or .01 alpha-level. 
RPD results were within the 95% confidence interval of RCTs around half the time, and false negative rates were substantially higher than false positive rates.</p><p><strong>Conclusions: </strong>RPD consistently underestimates treatment effects in validating RCTs. We analyze reasons for this insensitivity and offer practical suggestions to improve the chances RPD will correctly identify favorable results. We note that the synthetic, \\\"decomposition of cluster RCTs,\\\" WSC design represents a prototype for evaluating other QEs.</p>\",\"PeriodicalId\":47533,\"journal\":{\"name\":\"Evaluation Review\",\"volume\":\"45 6\",\"pages\":\"279-308\"},\"PeriodicalIF\":3.0000,\"publicationDate\":\"2021-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Evaluation Review\",\"FirstCategoryId\":\"90\",\"ListUrlMain\":\"https://doi.org/10.1177/0193841X211064420\",\"RegionNum\":4,\"RegionCategory\":\"社会学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2022/1/3 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"Q1\",\"JCRName\":\"SOCIAL SCIENCES, INTERDISCIPLINARY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Evaluation Review","FirstCategoryId":"90","ListUrlMain":"https://doi.org/10.1177/0193841X211064420","RegionNum":4,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2022/1/3 0:00:00","PubModel":"Epub","JCR":"Q1","JCRName":"SOCIAL SCIENCES, INTERDISCIPLINARY","Score":null,"Total":0}
An Empirical Validation of the Regression Point Displacement Design Using Within-Study Comparison Logic: Emerging Possibilities and Cautions.
Background: The regression point displacement (RPD) design is a quasi-experiment (QE) that aims to control many threats to internal validity. Though it has existed for several decades, RPD has only recently begun to answer applied research questions in lieu of stronger QEs.
Objectives: Our primary objective was to implement within-study comparison (WSC) logic to create RPD replicates and to determine conditions under which RPD might provide estimates comparable to those found in validating experiments.
Research design: We used three randomized controlled trials (two cluster-level, one individual-level), artificially decomposing or creating cluster structures to generate multiple RPDs. We compared results in each RPD treatment group to a fixed set of control groups to gauge the congruence of these repeated RPD realizations with the results found in the three RCTs.
Results: RPD's performance was uneven. Using multiple criteria, we found that RPDs successfully predicted the direction of the RCT's intervention effect but inconsistently fell within the .10 SD threshold. A scant 13% of RPD results were statistically significant at either the .05 or .01 alpha-level. RPD results were within the 95% confidence interval of RCTs around half the time, and false negative rates were substantially higher than false positive rates.
Conclusions: RPD consistently underestimates treatment effects in validating RCTs. We analyze reasons for this insensitivity and offer practical suggestions to improve the chances that RPD will correctly identify favorable results. We note that the synthetic "decomposition of cluster RCTs" WSC design represents a prototype for evaluating other QEs.
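The core RPD estimate discussed in the abstract can be sketched in a few lines. This is a minimal illustration assuming the textbook formulation of the design (fit a pretest-posttest regression on the control units, then measure how far the single treated unit's posttest is displaced from its predicted value); the function name and toy data are illustrative and not taken from the paper:

```python
import numpy as np

def rpd_displacement(pre_control, post_control, pre_treated, post_treated):
    """Displacement of the treated unit from the control-group regression line."""
    # OLS fit on control units only: post = a + b * pre.
    b, a = np.polyfit(pre_control, post_control, 1)
    predicted = a + b * pre_treated   # counterfactual posttest for the treated unit
    return post_treated - predicted   # the "point displacement" (effect estimate)

# Illustrative toy data: 20 control clusters and one treated cluster.
rng = np.random.default_rng(0)
pre = rng.normal(50, 10, 20)
post = 5 + 0.9 * pre + rng.normal(0, 3, 20)
effect = rpd_displacement(pre, post, pre_treated=55.0, post_treated=58.5)
```

Whether such a displacement is statistically distinguishable from zero depends on the scatter of the control points around the fitted line, which is one reason a design with a single treated point can be underpowered relative to an RCT.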
Journal introduction:
Evaluation Review is the forum for researchers, planners, and policy makers engaged in the development, implementation, and utilization of studies aimed at the betterment of the human condition. The Editors invite submission of papers reporting the findings of evaluation studies in such fields as child development, health, education, income security, manpower, mental health, criminal justice, and the physical and social environments. In addition, Evaluation Review will contain articles on methodological developments, discussions of the state of the art, and commentaries on issues related to the application of research results. Special features will include periodic review essays, "research briefs", and "craft reports".