Psychological Methods: Latest Publications

Planning falsifiable confirmatory research.
IF 7.6, Zone 1 (Psychology)
Psychological Methods Pub Date: 2024-12-12 DOI: 10.1037/met0000639
James E Kennedy
{"title":"Planning falsifiable confirmatory research.","authors":"James E Kennedy","doi":"10.1037/met0000639","DOIUrl":"https://doi.org/10.1037/met0000639","url":null,"abstract":"<p><p>Falsifiable research is a basic goal of science and is needed for science to be self-correcting. However, the methods for conducting falsifiable research are not widely known among psychological researchers. Describing the effect sizes that can be confidently investigated in confirmatory research is as important as describing the subject population. Power curves or operating characteristics provide this information and are needed for both frequentist and Bayesian analyses. These evaluations of inferential error rates indicate the performance (validity and reliability) of the planned statistical analysis. For meaningful, falsifiable research, the study plan should specify a minimum effect size that is the goal of the study. If any tiny effect, no matter how small, is considered meaningful evidence, the research is not falsifiable and often has negligible predictive value. Power ≥ .95 for the minimum effect is optimal for confirmatory research and .90 is good. From a frequentist perspective, the statistical model for the alternative hypothesis in the power analysis can be used to obtain a <i>p</i> value that can reject the alternative hypothesis, analogous to rejecting the null hypothesis. However, confidence intervals generally provide more intuitive and more informative inferences than p values. The preregistration for falsifiable confirmatory research should include (a) criteria for evidence the alternative hypothesis is true, (b) criteria for evidence the alternative hypothesis is false, and (c) criteria for outcomes that will be inconclusive. Not all confirmatory studies are or need to be falsifiable. (PsycInfo Database Record (c) 2024 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":" ","pages":""},"PeriodicalIF":7.6,"publicationDate":"2024-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142819026","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
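The abstract's recommendation to plan around a minimum effect size can be illustrated with a quick power calculation. The sketch below (Python, using statsmodels; not the author's procedure) solves for the per-group sample size giving power .95 at an assumed minimum effect of d = 0.30 with alpha = .05, then prints a small power curve for the resulting n. The effect sizes and alpha are illustrative assumptions, not values from the article.

```python
# Sample size for power .95 at an assumed minimum effect (d_min = 0.30,
# alpha = .05, two-sided, independent-groups t test), plus the power curve
# describing what that design can and cannot confidently detect.
import numpy as np
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
d_min, alpha = 0.30, 0.05                      # illustrative planning values

n_per_group = analysis.solve_power(effect_size=d_min, alpha=alpha, power=0.95)
print(f"n per group for power .95 at d = {d_min}: {np.ceil(n_per_group):.0f}")

for d in (0.10, 0.20, 0.30, 0.40, 0.50):       # power curve at the planned n
    pw = analysis.power(effect_size=d, nobs1=np.ceil(n_per_group), alpha=alpha)
    print(f"  power at d = {d:.2f}: {pw:.2f}")
```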
A simple statistical framework for small sample studies.
IF 7.6, Zone 1 (Psychology)
Psychological Methods Pub Date: 2024-12-05 DOI: 10.1037/met0000710
D Samuel Schwarzkopf, Zien Huang
{"title":"A simple statistical framework for small sample studies.","authors":"D Samuel Schwarzkopf, Zien Huang","doi":"10.1037/met0000710","DOIUrl":"https://doi.org/10.1037/met0000710","url":null,"abstract":"<p><p>Most studies in psychology, neuroscience, and life science research make inferences about how strong an effect is on average in the population. Yet, many research questions could instead be answered by testing for the universality of the phenomenon under investigation. By using reliable experimental designs that maximize both sensitivity and specificity of individual experiments, each participant or subject can be treated as an independent replication. This approach is common in certain subfields. To date, there is however no formal approach for calculating the evidential value of such small sample studies and to define a priori evidence thresholds that must be met to draw meaningful conclusions. Here we present such a framework, based on the ratio of binomial probabilities between a model assuming the universality of the phenomenon versus the null hypothesis that any incidence of the effect is sporadic. We demonstrate the benefits of this approach, which permits strong conclusions from samples as small as two to five participants and the flexibility of sequential testing. This approach will enable researchers to preregister experimental designs based on small samples and thus enhance the utility and credibility of such studies. (PsycInfo Database Record (c) 2024 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":" ","pages":""},"PeriodicalIF":7.6,"publicationDate":"2024-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142786849","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
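The evidence ratio described in the abstract can be sketched directly: the binomial probability of k individually positive participants out of n under a universality model, divided by the same probability under a null model in which individual positives occur only sporadically. The per-participant sensitivity (gamma = .8) and false-positive rate (alpha = .05) below are illustrative assumptions, not the thresholds proposed in the paper.

```python
# Evidence ratio: P(k of n participants show the effect | universal) divided by
# P(k of n | only sporadic/false-positive results). gamma and alpha are
# illustrative per-participant rates.
from scipy.stats import binom

def evidence_ratio(k, n, gamma=0.8, alpha=0.05):
    """Binomial probability ratio: universality model vs. sporadic-null model."""
    return binom.pmf(k, n, gamma) / binom.pmf(k, n, alpha)

for n in (2, 3, 4, 5):   # all n participants individually show the effect
    print(f"{n} of {n} participants -> evidence ratio {evidence_ratio(n, n):,.0f}")
```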
Comparison of noncentral t and distribution-free methods when using sequential procedures to control the width of a confidence interval for a standardized mean difference.
IF 7.6, Zone 1 (Psychology)
Psychological Methods Pub Date: 2024-12-01 DOI: 10.1037/met0000671
Douglas A Fitts
{"title":"Comparison of noncentral t and distribution-free methods when using sequential procedures to control the width of a confidence interval for a standardized mean difference.","authors":"Douglas A Fitts","doi":"10.1037/met0000671","DOIUrl":"https://doi.org/10.1037/met0000671","url":null,"abstract":"<p><p>sequential stopping rule (SSR) can generate a confidence interval (CI) for a standardized mean difference <i>d</i> that has an exact standardized width, ω. Two methods were tested using a broad range of ω and standardized effect sizes δ. A noncentral t (NCt) CI used with normally distributed data had coverages that were nominal at narrow widths but were slightly inflated at wider widths. A distribution-free (Dist-Free) method used with normally distributed data exhibited superior coverage and stopped on average at the expected sample sizes. When used with moderate to severely skewed lognormal distributions, the coverage was too low at large effect sizes even with a very narrow width where Dist-Free was expected to perform well, and the mean stopping sample sizes were absurdly elevated (thousands per group). SSR procedures negatively biased both the raw difference and the \"unbiased\" Hedges' g in the stopping sample with all methods and distributions. The <i>d</i> was the less biased estimator of δ when the distribution was normal. The poor coverage with a lognormal distribution resulted from a large positive bias in <i>d</i> that increased as a function of both ω and δ. Coverage and point estimation were little improved by using g instead of <i>d</i>. Increased stopping time resulted from the way an estimate of the variance is calculated when it encounters occasional extreme scores generated from the skewed distribution. The Dist-Free SSR method was superior when the distribution was normal or only slightly skewed but is not recommended with moderately skewed distributions. (PsycInfo Database Record (c) 2024 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":"29 6","pages":"1188-1208"},"PeriodicalIF":7.6,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142882871","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
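For readers unfamiliar with noncentral-t intervals, the sketch below shows the kind of NCt confidence interval for a standardized mean difference that such a sequential stopping rule monitors, together with the "is the width ≤ ω?" check applied after each look at the data. It is a minimal Python illustration, not the article's SSR algorithm; the group sizes and target width ω are assumptions for the example.

```python
# Minimal sketch (not the article's SSR algorithm): a noncentral-t (NCt)
# confidence interval for a standardized mean difference d, plus the
# "is the CI width <= omega?" check a sequential stopping rule would apply
# after each batch of data. Group sizes and omega are illustrative.
import numpy as np
from scipy import stats, optimize

def nct_ci_for_d(x1, x2, conf=0.95):
    """Noncentral-t CI for Cohen's d from two independent samples."""
    n1, n2 = len(x1), len(x2)
    df = n1 + n2 - 2
    sp = np.sqrt(((n1 - 1) * np.var(x1, ddof=1) + (n2 - 1) * np.var(x2, ddof=1)) / df)
    d = (np.mean(x1) - np.mean(x2)) / sp
    scale = np.sqrt(n1 * n2 / (n1 + n2))   # maps d onto the noncentrality scale
    t_obs = d * scale
    alpha = 1 - conf
    # Noncentrality parameters that place t_obs at the upper/lower tail
    # quantiles, converted back to the d metric.
    lo = optimize.brentq(lambda nc: stats.nct.cdf(t_obs, df, nc) - (1 - alpha / 2),
                         t_obs - 10, t_obs + 10)
    hi = optimize.brentq(lambda nc: stats.nct.cdf(t_obs, df, nc) - alpha / 2,
                         t_obs - 10, t_obs + 10)
    return d, lo / scale, hi / scale

rng = np.random.default_rng(1)
x1, x2 = rng.normal(0.5, 1, 40), rng.normal(0.0, 1, 40)
d, lower, upper = nct_ci_for_d(x1, x2)
omega = 0.6   # illustrative target standardized width
print(f"d = {d:.2f}, 95% CI [{lower:.2f}, {upper:.2f}], stop: {upper - lower <= omega}")
```

In an actual SSR, data collection would continue in batches until the width condition is met, which is where the estimation biases examined in the article arise.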
Correcting bias in extreme groups design using a missing data approach.
IF 7.6, Zone 1 (Psychology)
Psychological Methods Pub Date: 2024-12-01 Epub Date: 2022-07-18 DOI: 10.1037/met0000508
Lihan Chen, Rachel T Fouladi
{"title":"Correcting bias in extreme groups design using a missing data approach.","authors":"Lihan Chen, Rachel T Fouladi","doi":"10.1037/met0000508","DOIUrl":"10.1037/met0000508","url":null,"abstract":"<p><p>Extreme groups design (EGD) refers to the use of a screening variable to inform further data collection, such that only participants with the lowest and highest scores are recruited in subsequent stages of the study. It is an effective way to improve the power of a study under a limited budget, but produces biased standardized estimates. We demonstrate that the bias in EGD results from its inherent <i>missing at random</i> mechanism, which can be corrected using modern missing data techniques such as <i>full information maximum likelihood</i> (FIML). Further, we provide a tutorial on computing correlations in EGD data with FIML using R. (PsycInfo Database Record (c) 2024 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":" ","pages":"1123-1131"},"PeriodicalIF":7.6,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9922061","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
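The direction of the bias is easy to see in a small simulation: selecting only the extreme scorers on the screening variable inflates its sample variance and therefore the standardized association. The sketch below (Python) demonstrates the bias only; it does not reproduce the paper's FIML correction, which the authors implement in R. All parameter values are illustrative.

```python
# Bias demonstration only (the paper's FIML correction is done in R and is not
# reproduced here): keeping only the extreme quartiles on the screening
# variable x inflates the x-y correlation. Population rho and cutoffs are
# illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, rho = 100_000, 0.30
x = rng.normal(size=n)
y = rho * x + np.sqrt(1 - rho**2) * rng.normal(size=n)   # population correlation .30

full_r = np.corrcoef(x, y)[0, 1]

lo, hi = np.quantile(x, [0.25, 0.75])
keep = (x <= lo) | (x >= hi)                 # only the bottom and top 25% are recruited
egd_r = np.corrcoef(x[keep], y[keep])[0, 1]

print(f"full-sample r = {full_r:.3f}, extreme-groups r = {egd_r:.3f} (inflated)")
```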
Reliable network inference from unreliable data: A tutorial on latent network modeling using STRAND.
IF 7.6, Zone 1 (Psychology)
Psychological Methods Pub Date: 2024-12-01 Epub Date: 2023-03-06 DOI: 10.1037/met0000519
Daniel Redhead, Richard McElreath, Cody T Ross
{"title":"Reliable network inference from unreliable data: A tutorial on latent network modeling using STRAND.","authors":"Daniel Redhead, Richard McElreath, Cody T Ross","doi":"10.1037/met0000519","DOIUrl":"10.1037/met0000519","url":null,"abstract":"<p><p>Social network analysis provides an important framework for studying the causes, consequences, and structure of social ties. However, standard self-report measures-for example, as collected through the popular \"name-generator\" method-do not provide an impartial representation of such ties, be they transfers, interactions, or social relationships. At best, they represent perceptions filtered through the cognitive biases of respondents. Individuals may, for example, report transfers that did not really occur, or forget to mention transfers that really did. The propensity to make such reporting inaccuracies is both an individual-level and item-level characteristic-variable across members of any given group. Past research has highlighted that many network-level properties are highly sensitive to such reporting inaccuracies. However, there remains a dearth of easily deployed statistical tools that account for such biases. To address this issue, we provide a latent network model that allows researchers to jointly estimate parameters measuring both reporting biases and a latent, underlying social network. Building upon past research, we conduct several simulation experiments in which network data are subject to various reporting biases, and find that these reporting biases strongly impact fundamental network properties. These impacts are not adequately remedied using the most frequently deployed approaches for network reconstruction in the social sciences (i.e., treating either the union or the intersection of double-sampled data as the true network), but are appropriately resolved through the use of our latent network models. To make implementation of our models easier for end-users, we provide a fully documented R package, STRAND, and include a tutorial illustrating its functionality when applied to empirical food/money sharing data from a rural Colombian population. (PsycInfo Database Record (c) 2024 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":" ","pages":"1100-1122"},"PeriodicalIF":7.6,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10821258","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
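A small simulation makes the underlying problem concrete: when each directed tie is reported twice with some chance of being forgotten (false negatives) or over-reported (false positives), the union and intersection reconstructions distort network density in opposite directions. The sketch below is a Python illustration of that point only; it does not implement the STRAND latent network model, and the error rates and network size are assumed values.

```python
# Illustration only (not the STRAND model): double-sampled reports of a binary
# network with false-negative (fn) and false-positive (fp) reporting rates,
# reconstructed as the union or intersection of the two reports.
import numpy as np

rng = np.random.default_rng(0)
n, true_density = 60, 0.10
fn, fp = 0.30, 0.02                          # miss and false-alarm rates per report

off_diag = ~np.eye(n, dtype=bool)
true_net = (rng.random((n, n)) < true_density) & off_diag

def noisy_report(net):
    keep = rng.random(net.shape) > fn        # real ties sometimes go unreported
    add = rng.random(net.shape) < fp         # nonexistent ties sometimes reported
    return ((net & keep) | add) & off_diag

report_a, report_b = noisy_report(true_net), noisy_report(true_net)

def density(net):
    return net.sum() / (n * (n - 1))

for name, net in [("true", true_net),
                  ("union", report_a | report_b),
                  ("intersection", report_a & report_b)]:
    print(f"{name:12s} density = {density(net):.3f}")
```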
One-tailed tests: Let's do this (responsibly).
IF 7.6, Zone 1 (Psychology)
Psychological Methods Pub Date: 2024-12-01 Epub Date: 2023-11-02 DOI: 10.1037/met0000610
Andrew H Hales
{"title":"One-tailed tests: Let's do this (responsibly).","authors":"Andrew H Hales","doi":"10.1037/met0000610","DOIUrl":"10.1037/met0000610","url":null,"abstract":"<p><p>When preregistered, one-tailed tests control false-positive results at the same rate as two-tailed tests. They are also more powerful, provided the researcher correctly identified the direction of the effect. So it is surprising that they are not more common in psychology. Here I make an argument in favor of one-tailed tests and address common mistaken objections that researchers may have to using them. The arguments presented here only apply in situations where the test is clearly preregistered. If power is truly as urgent an issue as statistics reformers suggest, then the deliberate and thoughtful use of preregistered one-tailed tests ought to be not only permitted, but encouraged in cases where researchers desire greater power. One-tailed tests are especially well suited for applied questions, replications of previously documented effects, or situations where directionally unexpected effects would be meaningless. Preregistered one-tailed tests can sensibly align the researcher's stated theory with their tested hypothesis, bring a coherence to the practice of null hypothesis statistical testing, and produce generally more persuasive results. (PsycInfo Database Record (c) 2024 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":" ","pages":"1209-1218"},"PeriodicalIF":7.6,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"71426349","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
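The power claim is easy to verify numerically. The sketch below (Python, statsmodels) compares two-tailed and one-tailed power for an independent-samples t test at the same alpha; the effect size and sample size are illustrative assumptions.

```python
# Same alpha, same data-generating effect: a directional (one-tailed) test has
# higher power than a two-tailed test when the effect is in the predicted
# direction. Effect size and n are illustrative.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
d, n, alpha = 0.35, 100, 0.05
two = analysis.power(effect_size=d, nobs1=n, alpha=alpha, alternative="two-sided")
one = analysis.power(effect_size=d, nobs1=n, alpha=alpha, alternative="larger")
print(f"two-tailed power = {two:.2f}, one-tailed power = {one:.2f}")
```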
Comparing random effects models, ordinary least squares, or fixed effects with cluster robust standard errors for cross-classified data.
IF 7.6, Zone 1 (Psychology)
Psychological Methods Pub Date: 2024-12-01 Epub Date: 2023-03-09 DOI: 10.1037/met0000538
Young Ri Lee, James E Pustejovsky
{"title":"Comparing random effects models, ordinary least squares, or fixed effects with cluster robust standard errors for cross-classified data.","authors":"Young Ri Lee, James E Pustejovsky","doi":"10.1037/met0000538","DOIUrl":"10.1037/met0000538","url":null,"abstract":"<p><p>Cross-classified random effects modeling (CCREM) is a common approach for analyzing cross-classified data in psychology, education research, and other fields. However, when the focus of a study is on the regression coefficients at Level 1 rather than on the random effects, ordinary least squares regression with cluster robust variance estimators (OLS-CRVE) or fixed effects regression with CRVE (FE-CRVE) could be appropriate approaches. These alternative methods are potentially advantageous because they rely on weaker assumptions than those required by CCREM. We conducted a Monte Carlo Simulation study to compare the performance of CCREM, OLS-CRVE, and FE-CRVE in models, including conditions where homoscedasticity assumptions and exogeneity assumptions held and conditions where they were violated, as well as conditions with unmodeled random slopes. We found that CCREM out-performed the alternative approaches when its assumptions are all met. However, when homoscedasticity assumptions are violated, OLS-CRVE and FE-CRVE provided similar or better performance than CCREM. When the exogeneity assumption is violated, only FE-CRVE provided adequate performance. Further, OLS-CRVE and FE-CRVE provided more accurate inferences than CCREM in the presence of unmodeled random slopes. Thus, we recommend two-way FE-CRVE as a good alternative to CCREM, particularly if the homoscedasticity or exogeneity assumptions of the CCREM might be in doubt. (PsycInfo Database Record (c) 2024 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":" ","pages":"1084-1099"},"PeriodicalIF":7.6,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10871401","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
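As a point of reference for readers who have not used CRVE, the sketch below fits an ordinary least squares model with cluster-robust standard errors via statsmodels and contrasts them with the naive standard errors on simulated clustered data. For brevity it clusters on a single factor, whereas the article's cross-classified setting involves two crossed factors (and, for FE-CRVE, fixed effects for both); all simulation values are illustrative.

```python
# Sketch of the OLS-CRVE idea on simulated clustered data (one clustering
# factor only, for brevity). All simulation values are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_clusters, per_cluster = 40, 25
cluster = np.repeat(np.arange(n_clusters), per_cluster)
n = cluster.size
# Predictor and error both contain cluster-level components, so naive OLS
# standard errors are too small.
x = 0.7 * rng.normal(size=n_clusters)[cluster] + 0.7 * rng.normal(size=n)
y = 0.3 * x + rng.normal(scale=0.5, size=n_clusters)[cluster] + rng.normal(size=n)
df = pd.DataFrame({"y": y, "x": x, "cluster": cluster})

naive = smf.ols("y ~ x", data=df).fit()
crve = smf.ols("y ~ x", data=df).fit(cov_type="cluster",
                                     cov_kwds={"groups": df["cluster"]})
print(f"slope = {crve.params['x']:.3f}, naive SE = {naive.bse['x']:.3f}, "
      f"cluster-robust SE = {crve.bse['x']:.3f}")
```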
Improving hierarchical models of individual differences: An extension of Goldberg's bass-ackward method.
IF 7.6, Zone 1 (Psychology)
Psychological Methods Pub Date: 2024-12-01 Epub Date: 2023-02-13 DOI: 10.1037/met0000546
Miriam K Forbes
{"title":"Improving hierarchical models of individual differences: An extension of Goldberg's bass-ackward method.","authors":"Miriam K Forbes","doi":"10.1037/met0000546","DOIUrl":"10.1037/met0000546","url":null,"abstract":"<p><p>Goldberg's (2006) bass-ackward approach to elucidating the hierarchical structure of individual differences data has been used widely to improve our understanding of the relationships among constructs of varying levels of granularity. The traditional approach has been to extract a single component or factor on the first level of the hierarchy, two on the second level, and so on, treating the correlations between adjoining levels akin to path coefficients in a hierarchical structure. This article proposes three modifications to the traditional approach with a particular focus on examining associations among <i>all</i> levels of the hierarchy: (a) identify and remove redundant elements that perpetuate through multiple levels of the hierarchy; (b) (optionally) identify and remove artefactual elements; and (c) plot the strongest correlations among the remaining elements to identify their hierarchical associations. Together these steps can offer a simpler and more complete picture of the underlying hierarchical structure among a set of observed variables. The rationale for each step is described, illustrated in a hypothetical example and three basic simulations, and then applied in real data. The results are compared with the traditional bass-ackward approach together with agglomerative hierarchical cluster analysis, and a basic tutorial with code is provided to apply the extended bass-ackward approach in other data. (PsycInfo Database Record (c) 2024 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":" ","pages":"1062-1073"},"PeriodicalIF":7.6,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10696269","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
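The general bass-ackward logic, extracting one, two, three, ... rotated factors from the same items and correlating the scores across levels, can be sketched briefly. The example below uses scikit-learn's FactorAnalysis with varimax rotation as a stand-in for the rotated components typically used, applied to simulated two-factor data; it illustrates only the traditional approach, not the article's proposed extensions.

```python
# Traditional bass-ackward sketch (not the article's extensions): extract a
# 1-factor and then a 2-factor solution from the same simulated items and
# correlate the factor scores across levels.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n, per_factor = 1000, 5
traits = rng.multivariate_normal([0, 0], [[1, .4], [.4, 1]], size=n)  # correlated traits
loadings = np.zeros((2, 2 * per_factor))
loadings[0, :per_factor] = 0.7
loadings[1, per_factor:] = 0.7
X = traits @ loadings + rng.normal(scale=0.6, size=(n, 2 * per_factor))

def level_scores(X, k):
    """Standardized factor scores for a k-factor solution (varimax if k > 1)."""
    fa = FactorAnalysis(n_components=k, rotation="varimax" if k > 1 else None)
    s = fa.fit_transform(X)
    return (s - s.mean(0)) / s.std(0)

level1, level2 = level_scores(X, 1), level_scores(X, 2)
# Path-like correlations linking the level-1 factor to the two level-2 factors
print(np.round(np.corrcoef(level1.T, level2.T)[0, 1:], 2))
```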
Ubiquitous bias and false discovery due to model misspecification in analysis of statistical interactions: The role of the outcome's distribution and metric properties.
IF 7.6, Zone 1 (Psychology)
Psychological Methods Pub Date: 2024-12-01 Epub Date: 2022-10-06 DOI: 10.1037/met0000532
Benjamin W Domingue, Klint Kanopka, Sam Trejo, Mijke Rhemtulla, Elliot M Tucker-Drob
{"title":"Ubiquitous bias and false discovery due to model misspecification in analysis of statistical interactions: The role of the outcome's distribution and metric properties.","authors":"Benjamin W Domingue, Klint Kanopka, Sam Trejo, Mijke Rhemtulla, Elliot M Tucker-Drob","doi":"10.1037/met0000532","DOIUrl":"10.1037/met0000532","url":null,"abstract":"<p><p>Studies of interaction effects are of great interest because they identify crucial interplay between predictors in explaining outcomes. Previous work has considered several potential sources of statistical bias and substantive misinterpretation in the study of interactions, but less attention has been devoted to the role of the outcome variable in such research. Here, we consider bias and false discovery associated with estimates of interaction parameters as a function of the distributional and metric properties of the outcome variable. We begin by illustrating that, for a variety of noncontinuously distributed outcomes (i.e., binary and count outcomes), attempts to use the linear model for recovery leads to catastrophic levels of bias and false discovery. Next, focusing on transformations of normally distributed variables (i.e., censoring and noninterval scaling), we show that linear models again produce spurious interaction effects. We provide explanations offering geometric and algebraic intuition as to why interactions are a challenge for these incorrectly specified models. In light of these findings, we make two specific recommendations. First, a careful consideration of the outcome's distributional properties should be a standard component of interaction studies. Second, researchers should approach research focusing on interactions with heightened levels of scrutiny. (PsycInfo Database Record (c) 2024 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":" ","pages":"1164-1179"},"PeriodicalIF":7.6,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10369499/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9862990","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
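The false-discovery claim for binary outcomes is straightforward to reproduce in miniature: generate a binary outcome from a logistic model with main effects only and fit a linear model with a product term. The sketch below (Python, statsmodels) does exactly that; the coefficients, sample size, and number of replications are illustrative assumptions.

```python
# Binary outcome generated from a logistic model with NO interaction; a linear
# model fit to the 0/1 outcome still "finds" an x1-by-x2 interaction far more
# often than the nominal 5%. Coefficients, n, and reps are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n, reps, hits = 500, 200, 0

for _ in range(reps):
    x1, x2 = rng.normal(size=n), rng.normal(size=n)
    p = 1 / (1 + np.exp(-(0.5 + 1.0 * x1 + 1.0 * x2)))   # main effects only
    y = rng.binomial(1, p)
    data = pd.DataFrame({"y": y, "x1": x1, "x2": x2})
    fit = smf.ols("y ~ x1 * x2", data=data).fit()
    hits += fit.pvalues["x1:x2"] < 0.05

print(f"spurious interaction 'discovered' in {hits / reps:.0%} of replications")
```

Fitting a correctly specified logistic model to the same data brings the rejection rate for the interaction term back toward the nominal level.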
Why multiple hypothesis test corrections provide poor control of false positives in the real world.
IF 7.6, Zone 1 (Psychology)
Psychological Methods Pub Date: 2024-11-21 DOI: 10.1037/met0000678
Stanley E Lazic
{"title":"Why multiple hypothesis test corrections provide poor control of false positives in the real world.","authors":"Stanley E Lazic","doi":"10.1037/met0000678","DOIUrl":"https://doi.org/10.1037/met0000678","url":null,"abstract":"<p><p>Most scientific disciplines use significance testing to draw conclusions about experimental or observational data. This classical approach provides a theoretical guarantee for controlling the number of false positives across a set of hypothesis tests, making it an appealing framework for scientists seeking to limit the number of false effects or associations that they claim to observe. Unfortunately, this theoretical guarantee applies to few experiments, and the true false positive rate (FPR) is much higher. Scientists have plenty of freedom to choose the error rate to control, the tests to include in the adjustment, and the method of correction, making strong error control difficult to attain. In addition, hypotheses are often tested after finding unexpected relationships or patterns, the data are analyzed in several ways, and analyses may be run repeatedly as data accumulate. As a result, adjusted <i>p</i> values are too small, incorrect conclusions are often reached, and results are harder to reproduce. In the following, I argue why the FPR is rarely controlled meaningfully and why shrinking parameter estimates is preferable to <i>p</i> value adjustments. (PsycInfo Database Record (c) 2024 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":" ","pages":""},"PeriodicalIF":7.6,"publicationDate":"2024-11-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142688594","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
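One of the abstract's points, that a correction only controls errors across the tests actually entered into it, can be shown with a few lines of simulation. The sketch below (Python, using statsmodels' multipletests) assumes a researcher who runs 20 tests of true nulls, Bonferroni-adjusts only the 5 "planned" ones, and reports the remainder unadjusted; the counts are illustrative.

```python
# All 20 hypotheses are true nulls. Bonferroni-adjusting only the 5 "planned"
# tests while reporting the other 15 at the nominal level leaves the overall
# chance of at least one false positive claim far above alpha.
import numpy as np
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
reps, n_tests, n_adjusted, alpha = 5000, 20, 5, 0.05
any_false_claim = 0

for _ in range(reps):
    p = rng.uniform(size=n_tests)                    # true nulls -> uniform p values
    adjusted = multipletests(p[:n_adjusted], alpha=alpha, method="bonferroni")[0]
    unadjusted = p[n_adjusted:] < alpha              # "exploratory" tests, uncorrected
    any_false_claim += adjusted.any() or unadjusted.any()

print(f"overall false positive rate: {any_false_claim / reps:.2f} (nominal alpha = {alpha})")
```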