{"title":"Protocol for a systematic review of effect sizes and statistical power in the rodent fear conditioning literature","authors":"T.C. Moulin, C.F.D. Carneiro, M.R. Macleod, O.B. Amaral","doi":"10.1002/ebm2.16","DOIUrl":null,"url":null,"abstract":"<div>\n \n <p>The concepts of effect size and statistical power are often disregarded in basic neuroscience, and most articles in the field draw their conclusions solely based on the arbitrary significance thresholds of statistical inference tests. Moreover, studies are often underpowered, making conclusions from significance tests less reliable. With this in mind, we present the protocol of a systematic review to study the distribution of effect sizes and statistical power in the rodent fear conditioning literature, and to analyse how these factors influence the description and publication of results. To do this, we will conduct a search in PubMed for “fear conditioning” AND “mouse” OR “mice” OR “rat” OR “rats” and obtain all articles published online in 2013. Experiments will be included if they: (1) describe the effect(s) of a single intervention on fear conditioning acquisition or consolidation; (2) have a control group to which the experimental group is compared; (3) use freezing as a measure of conditioned fear and (4) have available data on mean freezing, standard deviation and sample size of each group and on the statistical significance of the comparison. We will use the extracted data to calculate the distribution of effect sizes in these experiments as well as the distribution of statistical power curves for detecting a range of differences at a threshold of α = 0.05. We will assess correlations between these variables and (1) the chances of a result being statistically significant, (2) the way the result is described in the article text, (3) measures to reduce risk of bias in the article and (4) the impact factor of the journal and the number of citations of the article. We will also perform analyses to see whether effect sizes vary systematically across species, gender, conditioning protocols or intervention types.</p>\n </div>","PeriodicalId":90826,"journal":{"name":"Evidence-based preclinical medicine","volume":"3 1","pages":"24-32"},"PeriodicalIF":0.0000,"publicationDate":"2016-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1002/ebm2.16","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Evidence-based preclinical medicine","FirstCategoryId":"1085","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/ebm2.16","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2
Abstract
The concepts of effect size and statistical power are often disregarded in basic neuroscience, and most articles in the field base their conclusions solely on the arbitrary significance thresholds of statistical inference tests. Moreover, studies are often underpowered, making conclusions from significance tests less reliable. With this in mind, we present the protocol of a systematic review to study the distribution of effect sizes and statistical power in the rodent fear conditioning literature, and to analyse how these factors influence the description and publication of results. To do this, we will conduct a search in PubMed for “fear conditioning” AND “mouse” OR “mice” OR “rat” OR “rats” and obtain all articles published online in 2013. Experiments will be included if they: (1) describe the effect(s) of a single intervention on fear conditioning acquisition or consolidation; (2) have a control group to which the experimental group is compared; (3) use freezing as a measure of conditioned fear; and (4) have available data on the mean freezing, standard deviation and sample size of each group and on the statistical significance of the comparison. We will use the extracted data to calculate the distribution of effect sizes in these experiments, as well as the distribution of statistical power curves for detecting a range of differences at a threshold of α = 0.05. We will assess correlations between these variables and (1) the chances of a result being statistically significant, (2) the way the result is described in the article text, (3) measures to reduce risk of bias in the article and (4) the impact factor of the journal and the number of citations of the article. We will also perform analyses to see whether effect sizes vary systematically across species, gender, conditioning protocols or intervention types.
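To illustrate the kind of calculation the protocol describes, here is a minimal sketch of computing a standardized effect size from each group's mean freezing, standard deviation and sample size, and the post hoc power of a two-sample t-test to detect that effect at α = 0.05. This is not the authors' analysis code; the choice of Hedges' g, the t-test framework and the freezing values below are illustrative assumptions.

```python
# Sketch of an effect-size and power calculation for one control vs. treated
# comparison. All numeric values below are hypothetical, for illustration only.
from math import sqrt
from scipy.stats import t, nct


def hedges_g(mean_ctrl, sd_ctrl, n_ctrl, mean_exp, sd_exp, n_exp):
    """Bias-corrected standardized mean difference (Hedges' g)."""
    pooled_sd = sqrt(((n_ctrl - 1) * sd_ctrl**2 + (n_exp - 1) * sd_exp**2)
                     / (n_ctrl + n_exp - 2))
    d = (mean_exp - mean_ctrl) / pooled_sd           # Cohen's d
    j = 1 - 3 / (4 * (n_ctrl + n_exp) - 9)           # small-sample correction
    return d * j


def t_test_power(effect_size, n1, n2, alpha=0.05):
    """Power of a two-sided two-sample t-test for a given standardized effect."""
    df = n1 + n2 - 2
    ncp = abs(effect_size) * sqrt(n1 * n2 / (n1 + n2))  # noncentrality parameter
    t_crit = t.ppf(1 - alpha / 2, df)
    # P(|T| > t_crit) under the noncentral t distribution
    return (1 - nct.cdf(t_crit, df, ncp)) + nct.cdf(-t_crit, df, ncp)


# Hypothetical experiment: mean % freezing, SD and n for control vs. treated group
g = hedges_g(mean_ctrl=55.0, sd_ctrl=18.0, n_ctrl=10,
             mean_exp=35.0, sd_exp=20.0, n_exp=10)
print(f"Hedges' g = {g:.2f}")
print(f"Power at alpha = 0.05: {t_test_power(g, 10, 10):.2f}")
```

Repeating the power calculation over a range of assumed effect sizes for each experiment's sample sizes would yield the power curves mentioned in the abstract.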