Effective Statistics Are Essential for Strengthening a Paper in Brain and Behavior

Michal Ordak, Raffaella Bosurgi

Brain and Behavior, launched in 2011 by Andrei V. Alexandrov, is an open-access, multidisciplinary journal dedicated to the rapid publication of original research in neurology, neuroscience, psychology, and psychiatry. Since then, the journal has expanded its scope, and today it publishes studies on brain function and behavior in relation to health, social and political systems, and the environment. It accepts research in the clinical and basic sciences, including descriptive studies, preliminary research, and work on reproducibility. The goal is to support authors in publishing research that meets high publication standards. Brain and Behavior's philosophy is to disseminate good and sound science. A good study does not have to be novel or groundbreaking, but it must be robust and must follow rigorous methodology and statistics. Demonstrating that a result can be replicated in independent studies is key to validating novel findings, and good statistics can enhance the overall impact of a study (Alexandrov 2011). Statistics plays three important roles in brain research: the study of differences between brains in distinct populations, the study of variability in the structure and functioning of the brain, and the study of data reduction for large-scale brain data. These areas are illustrated by research on brain connectivity, information flow, large-scale neuroimaging data, and predictive modeling. With these roles in mind, one can consider how statistical science continues to support brain decoding and how collaboration between the statistical and neurobiological communities may help address future questions (Chén 2019). The most recent data suggest that the quality of statistical reporting is low (Ordak 2024). By providing a set of basic statistical recommendations, we aim to guide authors and help improve the quality and transparency of future submissions to Brain and Behavior.

One key recommendation concerns effect size measures. A p value indicates whether an observed effect is unlikely to be due to chance, but it does not communicate the strength or importance of the effect. Effect sizes allow researchers to determine how large a difference or association is and whether it is practically or clinically meaningful. Including them supports interpretability and enables comparisons between studies. An example can be found in a study published in Brain and Behavior by Elkjaer et al., in which the authors tested the emotional impact of sequentially induced anger and anxiety, emotions associated with opposing action tendencies (approach vs. avoidance), using independent samples t-tests and mixed-effects ANOVAs. Emotional impact in this context refers to changes in emotional states (e.g., anger, anxiety, and irritability) and behavioral inclinations, assessed through visual and verbal measures of action tendencies, following experimental manipulations designed to test the incompatible response hypothesis. The authors examined associations between outcome measures through correlation analysis and reported effect sizes using Pearson's r, Cohen's d, and partial eta squared, interpreting these values against the thresholds proposed by Cohen for small, medium, and large effects (Elkjaer et al. 2023). As Jacob Cohen emphasized, “The primary product of a research inquiry is one or more measures of effect size, not p values” (Cohen 1990). Effect size measures provide information about the magnitude and practical significance of a finding, helping readers understand whether a result is not only statistically significant but also meaningful in real-world or clinical contexts. In contrast, p values alone do not convey how strong or relevant an effect is.
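
To illustrate, here is a minimal sketch (the data are simulated and the group names hypothetical) of reporting Cohen's d alongside a t-test p value in Python:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
group_a = rng.normal(loc=5.0, scale=1.0, size=30)  # simulated scores, group A
group_b = rng.normal(loc=5.8, scale=1.0, size=30)  # simulated scores, group B

# The independent-samples t-test gives the p value...
t_stat, p_value = stats.ttest_ind(group_a, group_b)

# ...but Cohen's d conveys the magnitude: the mean difference scaled by
# the pooled standard deviation.
n_a, n_b = len(group_a), len(group_b)
pooled_sd = np.sqrt(((n_a - 1) * group_a.var(ddof=1) +
                     (n_b - 1) * group_b.var(ddof=1)) / (n_a + n_b - 2))
cohens_d = (group_b.mean() - group_a.mean()) / pooled_sd

print(f"t({n_a + n_b - 2}) = {t_stat:.2f}, p = {p_value:.3f}, d = {cohens_d:.2f}")
```

By Cohen's conventions, d values of roughly 0.2, 0.5, and 0.8 mark small, medium, and large effects, though these thresholds are heuristics rather than rules.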

Another recommendation concerns the choice of appropriate post-hoc tests in the context of multiple comparisons. Selecting the correct post-hoc procedure depends on assumptions such as distribution type and homogeneity of variance. A common mistake is applying a post-hoc test without verifying these assumptions, which can lead to markedly different outcomes. A well-executed example can be found in a study published in Nature Communications on developmental mechanisms in the embryonic brain. For comparisons involving parametric data, the authors used one-way ANOVA followed by Tukey's test. For nonparametric data, they applied the Kruskal–Wallis test followed by Dunn's test. They clearly stated which tests were used under which conditions, and their rationale reflected statistical appropriateness, such as variance homogeneity (László et al. 2019).
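
This branching logic can be sketched as follows. The data are simulated, the 0.05 threshold is conventional, and Dunn's test here comes from the third-party scikit-posthocs package rather than SciPy itself:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd
import scikit_posthocs as sp  # third-party package providing Dunn's test

rng = np.random.default_rng(0)
groups = [rng.normal(10 + i, 2, 25) for i in range(3)]  # three illustrative samples

# Check the assumptions before choosing the omnibus and post-hoc tests.
normal = all(stats.shapiro(g).pvalue > 0.05 for g in groups)
equal_var = stats.levene(*groups).pvalue > 0.05

if normal and equal_var:
    f_stat, p = stats.f_oneway(*groups)               # parametric omnibus test
    values = np.concatenate(groups)
    labels = np.repeat(["A", "B", "C"], [len(g) for g in groups])
    print(pairwise_tukeyhsd(values, labels))          # Tukey's HSD post-hoc
else:
    h_stat, p = stats.kruskal(*groups)                # nonparametric omnibus test
    print(sp.posthoc_dunn(groups, p_adjust="bonferroni"))  # Dunn's post-hoc
```

The point is not the specific packages but that the choice of post-hoc procedure is stated and justified by the assumption checks.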

Testing and reporting statistical assumptions is also an important aspect to consider. These include, for example, assumptions such as normality, sphericity, homogeneity of variance, equal group sizes, and the absence of multicollinearity among predictors. One of the most frequent mistakes is using parametric tests even when the assumptions are visibly violated. It is therefore recommended not only to state which tests were used, but also to explain how their assumptions were evaluated and whether they were met. A good example of such proper reporting appears in a study by Grisoni et al. in Journal of Neuroscience, examining semantic processing during language comprehension and production. The authors reported key information for ANOVA analyses, including F-values, degrees of freedom, p values, and effect sizes. They checked for violations of sphericity, and when necessary, applied the Greenhouse–Geisser correction, explicitly reporting the correction values (Grisoni et al. 2024). This approach offers clarity about how assumption violations were addressed.
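
For repeated-measures designs such as the one in the cited study, sphericity checks and the Greenhouse–Geisser correction can be reported together. A minimal sketch with invented data, assuming the third-party pingouin package (any software that reports Mauchly's test would serve equally well):

```python
import numpy as np
import pandas as pd
import pingouin as pg  # third-party package, chosen here for illustration

rng = np.random.default_rng(1)
n_subjects, conditions = 20, ["c1", "c2", "c3"]
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subjects), len(conditions)),
    "condition": np.tile(conditions, n_subjects),
    "score": rng.normal(0, 1, n_subjects * len(conditions)),
})

# Mauchly's test of sphericity; a small p value signals a violation.
print(pg.sphericity(df, dv="score", within="condition", subject="subject"))

# Repeated-measures ANOVA; correction="auto" applies Greenhouse-Geisser
# when sphericity is violated, and the output table reports the corrected
# p value and epsilon alongside F, degrees of freedom, and effect size.
print(pg.rm_anova(df, dv="score", within="condition", subject="subject",
                  correction="auto", detailed=True))
```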

Statistical outcomes can be influenced by outliers, which can change effect estimates or distort conclusions. This is particularly relevant in correlation analyses. In such contexts, a single data point that deviates from the general distribution can inflate the correlation coefficient, create an artificial sense of association, or even reverse the direction of the correlation, from negative to positive or vice versa. For this reason, authors should analyze the influence of outliers and report how they were handled during statistical analysis (Makin and Orban de Xivry 2019).
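
A toy demonstration with simulated data shows how one extreme point can manufacture an apparently strong Pearson correlation, and how a rank-based coefficient resists it:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
x = rng.normal(0, 1, 30)
y = rng.normal(0, 1, 30)  # independent of x: true correlation is ~0

r_clean, p_clean = stats.pearsonr(x, y)

# Append one extreme point far from the bulk of the data.
x_out = np.append(x, 10.0)
y_out = np.append(y, 10.0)
r_out, p_out = stats.pearsonr(x_out, y_out)

# A rank-based coefficient is far less sensitive to the same point.
rho_out, _ = stats.spearmanr(x_out, y_out)

print(f"Pearson r without the outlier: {r_clean:.2f} (p = {p_clean:.3f})")
print(f"Pearson r with one outlier:    {r_out:.2f} (p = {p_out:.3f})")
print(f"Spearman rho with the outlier: {rho_out:.2f}")
```

Reporting the analysis both with and without influential points lets readers judge how much a conclusion depends on them.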

It is also worth mentioning some aspects of meta-analysis. For instance, it is essential to remember that the I-squared value is not an absolute measure of heterogeneity. Instead, it quantifies the proportion of total variability that is due to heterogeneity between studies rather than to random error (a short worked sketch follows this paragraph). A further recommendation concerns the analysis of interaction effects, such as those encountered in ANOVA or regression. It is not sufficient simply to state that an interaction was tested; authors should explain how the interaction was analyzed and interpreted, including whether any follow-up tests or decompositions were carried out. Attention should also be paid to the choice of descriptive statistics. In nonparametric contexts, it is not appropriate to report only means and standard deviations; instead, authors should report statistics such as the median and interquartile range, which more accurately reflect the distributional characteristics of the data.
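
Concretely, I-squared is conventionally derived from Cochran's Q: with k studies, I² = max(0, (Q - (k - 1)) / Q) × 100%. A minimal sketch under fixed-effect inverse-variance weighting, using invented study-level effect sizes and variances:

```python
import numpy as np

# Invented study-level effect sizes and their variances.
effects = np.array([0.30, 0.45, 0.12, 0.60, 0.25])
variances = np.array([0.02, 0.05, 0.03, 0.04, 0.02])

weights = 1.0 / variances                      # inverse-variance weights
pooled = np.sum(weights * effects) / np.sum(weights)

q = np.sum(weights * (effects - pooled) ** 2)  # Cochran's Q
df = len(effects) - 1
i_squared = max(0.0, (q - df) / q) * 100       # percent of total variability
                                               # attributable to heterogeneity

print(f"Q = {q:.2f} (df = {df}), I^2 = {i_squared:.1f}%")
```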

The handling of missing data also deserves particular attention. It should be clearly described, including whether observations were excluded, imputed, or handled using specific statistical models, and the chosen method should be explained so that readers can properly assess its influence on the results. Clear reporting of statistical tests is also crucial. Authors should specify which test was used and where it was applied. For example, a notation like “U = 12; p = 0.03” enables readers to understand precisely which test was conducted and to evaluate the appropriateness and strength of the result. Clarifying the purpose of each analysis, that is, the specific hypothesis or research question each test was meant to address, is equally important. Unfortunately, it is still common to see lists of tests without any explanation of their role in the analysis.
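
As an illustration of this reporting style, the sketch below runs a Mann–Whitney U test on invented data and formats the output so readers can see exactly which test was conducted, together with descriptives suited to a nonparametric analysis:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
treatment = rng.normal(1.0, 1.0, 12)  # invented outcome scores
control = rng.normal(0.2, 1.0, 12)

u_stat, p_value = stats.mannwhitneyu(treatment, control,
                                     alternative="two-sided")

# Report the statistic, sample sizes, and medians with interquartile
# ranges rather than means and standard deviations.
print(f"Mann-Whitney U = {u_stat:.0f}, n1 = n2 = 12, p = {p_value:.3f}; "
      f"medians {np.median(treatment):.2f} (IQR {stats.iqr(treatment):.2f}) "
      f"vs {np.median(control):.2f} (IQR {stats.iqr(control):.2f})")
```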

Last but not least, when using advanced statistical methods, authors should briefly explain why these approaches were selected and what they involve in general terms. This will allow readers to follow the logic of the analysis, which is particularly important given that even basic statistical concepts are sometimes poorly understood. For advanced methods, such explanation becomes even more critical (Ordak et al. 2024; Ordak 2022).

We hope that these basic statistical recommendations will guide authors in preparing their papers and help them focus on key statistical considerations before submitting their work to Brain and Behavior.

Michal Ordak: conceptualization, methodology, writing – review and editing, writing – original draft, visualization. Raffaella Bosurgi: writing – review and editing, conceptualization.

The authors declare no conflicts of interest.
