{"title":"The power of effect size stabilization.","authors":"Benjamin Kowialiewski","doi":"10.3758/s13428-024-02549-3","DOIUrl":null,"url":null,"abstract":"<p><p>Determining an appropriate sample size in psychological experiments is a common challenge, requiring a balance between maximizing the chance of detecting a true effect (minimizing false negatives) and minimizing the risk of observing an effect where none exists (minimizing false positives). A recent study proposes using effect size stabilization, a form of optional stopping, to define sample size without increasing the risk of false positives. In effect size stabilization, researchers monitor the effect size of their samples throughout the sampling process and stop sampling when the effect no longer varies beyond predefined thresholds. This study aims to improve our understanding of effect size stabilization properties. Simulations involving effect size stabilization are presented, with parametric modulation of the true effect in the population and the strictness of the stabilization rule. As previously demonstrated, the results indicate that optional stopping based on effect-size stabilization consistently yields unbiased samples over the long run. However, simulations also reveal that effect size stabilization does not guarantee the detection of a true effect in the population. Consequently, researchers adopting effect size stabilization put themselves at risk of increasing type 2 error probability. Instead of using effect-size stabilization procedures for testing, researchers should use them to reach accurate parameter estimates.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":"57 1","pages":"7"},"PeriodicalIF":4.6000,"publicationDate":"2024-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Behavior Research Methods","FirstCategoryId":"102","ListUrlMain":"https://doi.org/10.3758/s13428-024-02549-3","RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"PSYCHOLOGY, EXPERIMENTAL","Score":null,"Total":0}
Citations: 0
Abstract
Determining an appropriate sample size in psychological experiments is a common challenge, requiring a balance between maximizing the chance of detecting a true effect (minimizing false negatives) and minimizing the risk of observing an effect where none exists (minimizing false positives). A recent study proposes using effect-size stabilization, a form of optional stopping, to define sample size without increasing the risk of false positives. In effect-size stabilization, researchers monitor the effect size of their sample throughout the sampling process and stop sampling once the estimate no longer varies beyond predefined thresholds. This study aims to improve our understanding of the properties of effect-size stabilization. Simulations involving effect-size stabilization are presented, with parametric modulation of the true effect in the population and the strictness of the stabilization rule. As previously demonstrated, the results indicate that optional stopping based on effect-size stabilization consistently yields unbiased samples over the long run. However, the simulations also reveal that effect-size stabilization does not guarantee the detection of a true effect in the population. Consequently, researchers adopting effect-size stabilization risk inflating their Type II error rate. Instead of using effect-size stabilization procedures for hypothesis testing, researchers should use them to obtain accurate parameter estimates.
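To make the stopping rule concrete, the sketch below simulates one experiment under an effect-size stabilization rule: observations are drawn one at a time, a running Cohen's d is computed after each draw, and sampling stops once the estimate has stayed within a small corridor for a fixed window of observations. This is a minimal illustration under stated assumptions, not the paper's exact simulation; the parameter names and values (corridor, window, n_min, n_max) are hypothetical choices made for readability.

```python
import numpy as np

def simulate_stabilization(true_d=0.3, n_min=20, n_max=500,
                           window=20, corridor=0.1, seed=None):
    """Simulate one experiment under an effect-size stabilization stopping rule.

    Observations are drawn one at a time from N(true_d, 1), so the
    standardized sample mean estimates Cohen's d against a null of 0.
    Sampling stops when the last `window` running estimates all lie
    within +/- `corridor` of the current estimate, or at `n_max`.
    (Illustrative sketch; parameter values are assumptions, not the
    settings used in the published simulations.)
    """
    rng = np.random.default_rng(seed)
    data, d_history = [], []

    for n in range(1, n_max + 1):
        data.append(rng.normal(loc=true_d, scale=1.0))
        sample = np.asarray(data)
        # Cohen's d for a one-sample design: mean / SD (undefined at n = 1).
        d = sample.mean() / sample.std(ddof=1) if n > 1 else np.nan
        d_history.append(d)

        # Stabilization check: only after a minimum sample size, and only
        # once a full window of non-degenerate estimates is available.
        if n >= max(n_min, window + 1):
            recent = np.asarray(d_history[-window:])
            if np.all(np.abs(recent - d) <= corridor):
                return n, d          # stopped by stabilization

    return n_max, d_history[-1]      # ceiling reached without stabilizing

if __name__ == "__main__":
    results = [simulate_stabilization(true_d=0.3, seed=s) for s in range(1000)]
    ns, ds = zip(*results)
    print(f"mean stopping n = {np.mean(ns):.1f}, "
          f"mean d at stop = {np.mean(ds):.3f}")
```

Running many replications of such a procedure and tabulating how often the stopped sample yields a significant test is one way to see the Type II error issue the abstract describes: with a small true effect and a lenient corridor, stabilization can occur at sample sizes that leave the subsequent test underpowered, even though the effect-size estimates are unbiased on average.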
Journal Introduction:
Behavior Research Methods publishes articles concerned with the methods, techniques, and instrumentation of research in experimental psychology. The journal focuses particularly on the use of computer technology in psychological research. An annual special issue is devoted to this field.