Psychological Methods: Latest Articles

Yes stormtrooper, these are the droids you are looking for: Identifying and preliminarily evaluating bot and fraud detection strategies in online psychological research.
IF 7.6 · Q1 (Psychology)
Psychological Methods · Pub Date: 2025-03-03 · DOI: 10.1037/met0000724
Thomas J Shaw, Cory J Cascalheira, Emily C Helminen, Cal D Brisbin, Skyler D Jackson, Melissa Simone, Tami P Sullivan, Abigail W Batchelder, Jillian R Scheer

Abstract: Bots (i.e., automated software programs that perform various tasks) and fraudulent responders pose a growing and costly threat to psychological research and to data integrity. However, few studies have been published on this topic. The aims of this study were to (a) describe our experience with bots and fraudulent responders using a case study, (b) present various bot and fraud detection tactics (BFDTs) and report the number of suspected bot and fraudulent respondents removed, (c) propose a consensus confidence system for eliminating bots and fraudulent responders that helps determine the number of BFDTs researchers should use, and (d) examine the initial effectiveness of dynamic versus static BFDT protocols. This study is part of a larger 14-day experience sampling method study with trauma-exposed sexual minority cisgender women and transgender and/or nonbinary people. Faced with several bot and fraudulent responder infiltrations during data collection, we developed an evolving BFDT protocol to eliminate them. Throughout this study, we received 24,053 responses to our baseline survey. After applying our BFDT protocols, we eliminated 99.75% of respondents as likely bots or fraudulent responders. Some BFDTs seemed to be more effective and afford higher confidence than others, dynamic protocols seemed to be more effective than static protocols, and bots and fraudulent responders introduced significant bias into the results. This study advances online psychological research by curating one of the largest samples of bot and fraudulent respondents and pilot testing the largest number of BFDTs to date. Recommendations for future research are provided. (PsycInfo Database Record (c) 2025 APA, all rights reserved.)

Citations: 0
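The consensus-confidence idea in the abstract can be sketched as follows: each detection tactic contributes one flag, and respondents who accumulate enough flags are removed. This is a hypothetical illustration only; the field names, tactics, and thresholds are ours, not the authors' protocol.

```python
# Sketch of a consensus-confidence bot/fraud screen: each bot/fraud
# detection tactic (BFDT) adds one flag, and respondents at or above a
# flag threshold are removed. All field names and cutoffs are illustrative.

def screen_respondents(respondents, min_duration_s=120, flag_threshold=2):
    # Count how many respondents share each IP address.
    ip_counts = {}
    for r in respondents:
        ip_counts[r["ip"]] = ip_counts.get(r["ip"], 0) + 1

    kept, removed = [], []
    for r in respondents:
        flags = 0
        if ip_counts[r["ip"]] > 1:            # BFDT 1: duplicate IP
            flags += 1
        if r["duration_s"] < min_duration_s:  # BFDT 2: implausibly fast
            flags += 1
        if not r["attention_check_passed"]:   # BFDT 3: failed check item
            flags += 1
        (removed if flags >= flag_threshold else kept).append(r)
    return kept, removed
```

Requiring two or more converging flags, rather than any single flag, is one simple way to trade sensitivity against the risk of excluding genuine participants.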
Experiments in daily life: When causal within-person effects do (not) translate into between-person differences.
Psychological Methods · Pub Date: 2025-03-03 · DOI: 10.1037/met0000741
Andreas B Neubauer, Peter Koval, Michael J Zyphur, Ellen L Hamaker

Abstract: Intensive longitudinal designs allow researchers to study the dynamics of psychological processes in daily life. Yet, because these designs are usually observational, they do not allow strong causal inferences. A promising solution is to incorporate (micro-)randomized interventions within intensive longitudinal designs to uncover within-person (Wp) causal effects. However, it remains unclear whether (or how) the resulting Wp causal effects translate into between-person (Bp) differences in outcomes. In this work, we show analytically and with simulated data that Wp causal effects translate into Bp differences if there are no counteracting forces that modulate this cross-level translation. Three possible counteracting forces considered here are (a) contextual effects, (b) correlated random effects, and (c) cross-level interactions. We illustrate these principles using empirical data from a 10-day microrandomized mindfulness intervention study (n = 91), in which participants were randomized to complete a treatment or control task at each occasion. We conclude by providing recommendations regarding the design of microrandomized experiments in intensive longitudinal designs, as well as the statistical analysis of the resulting data. (PsycInfo Database Record (c) 2025 APA, all rights reserved.)

Citations: 0
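A minimal simulation can show the contextual-effect counteracting force the abstract describes: the within-person slope stays at its causal value, while a contextual effect of the person's mean treatment rate changes what appears between persons. All parameter values and the estimation shortcut (person-mean centering plus OLS) are illustrative assumptions, not the authors' analysis.

```python
import random
from collections import defaultdict

def simulate_mrt(n_persons=500, n_obs=20, beta_wp=0.5, beta_context=0.0, seed=1):
    """Simulate a microrandomized trial: treatment x is randomized at each
    occasion. beta_context adds a contextual (between-person) effect of the
    person's mean treatment rate that can counteract the within-person effect."""
    random.seed(seed)
    rows = []
    for person in range(n_persons):
        u = random.gauss(0, 0.1)  # random intercept
        xs = [random.randint(0, 1) for _ in range(n_obs)]
        xbar = sum(xs) / n_obs
        for x in xs:
            y = beta_wp * x + beta_context * xbar + u + random.gauss(0, 0.1)
            rows.append((person, x, y))
    return rows

def wp_bp_slopes(rows):
    """Within-person slope (pooled OLS on person-centered data) and
    between-person slope (OLS on person means)."""
    by_person = defaultdict(list)
    for person, x, y in rows:
        by_person[person].append((x, y))
    num = den = 0.0
    means = []
    for obs in by_person.values():
        mx = sum(x for x, _ in obs) / len(obs)
        my = sum(y for _, y in obs) / len(obs)
        means.append((mx, my))
        for x, y in obs:
            num += (x - mx) * (y - my)
            den += (x - mx) ** 2
    wp = num / den
    gx = sum(a for a, _ in means) / len(means)
    gy = sum(b for _, b in means) / len(means)
    bp = sum((a - gx) * (b - gy) for a, b in means) / \
        sum((a - gx) ** 2 for a, _ in means)
    return wp, bp
```

With `beta_context = 0` the Bp slope tracks the Wp effect; with `beta_context = -beta_wp` the person-mean association is cancelled even though the Wp causal effect is unchanged.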
Impact of temporal order selection on clustering intensive longitudinal data based on vector autoregressive models.
Psychological Methods · Pub Date: 2025-03-03 · DOI: 10.1037/met0000747
Yaqi Li, Hairong Song, Bertus Jeronimus

Abstract: When multivariate intensive longitudinal data are collected from a sample of individuals, model-based clustering approaches (e.g., vector autoregressive [VAR] based) can be used to cluster individuals based on the (dis)similarity of their person-specific dynamics of the studied processes. To implement such clustering procedures, one must set the temporal order to be identical for all individuals; however, between-individual differences in temporal order are well documented for psychological and behavioral processes. One existing method applies the most complex structure, or the highest order (HO), to all processes, while another uses the most parsimonious structure, or the lowest order (LO). To date, the impact of these choices has not been well studied. In our simulation study, we examined the performance of HO and LO in conjunction with Gaussian mixture model (GMM) and k-means algorithms when a two-step VAR-based clustering procedure is implemented across various data conditions. We found that (a) the LO outperformed the HO in cluster identification, (b) the HO was more favorable than the LO in estimating cluster-specific dynamics, (c) the GMM generally outperformed k-means, and (d) the LO in conjunction with the GMM produced the best cluster identification outcomes. We demonstrate the use of the VAR-based clustering technique with data from the "How Nuts are the Dutch" project. We then discuss the results of our analyses, the limitations of our study, and directions for future research, and offer recommendations on the empirical use of model-based clustering techniques. (PsycInfo Database Record (c) 2025 APA, all rights reserved.)

Citations: 0
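The two-step procedure the abstract evaluates can be sketched as: fit a person-specific VAR(1) by least squares, then run k-means on the vectorized coefficient matrices. This sketch fixes the temporal order at 1 and uses a bare-bones k-means with deterministic farthest-first initialization; it is an illustration, not the authors' pipeline.

```python
import numpy as np

def fit_var1(series):
    """OLS fit of a person-specific VAR(1): x_t = A @ x_{t-1} + e_t.
    series: T x k array; returns the k x k coefficient matrix A."""
    X, Y = series[:-1], series[1:]
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)  # solves X @ B ~= Y
    return B.T

def cluster_var_coefs(all_series, n_clusters=2, n_iter=50):
    """Step 1: fit each person's VAR(1). Step 2: k-means on the
    vectorized coefficient matrices (Lloyd iterations)."""
    feats = np.array([fit_var1(s).ravel() for s in all_series])
    # Farthest-first initialization keeps the sketch deterministic.
    centers = [feats[0]]
    for _ in range(n_clusters - 1):
        dists = np.min(
            [np.linalg.norm(feats - c, axis=1) for c in centers], axis=0)
        centers.append(feats[dists.argmax()])
    centers = np.array(centers)
    for _ in range(n_iter):
        d = np.linalg.norm(feats[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for c in range(n_clusters):
            if (labels == c).any():
                centers[c] = feats[labels == c].mean(axis=0)
    return labels
```

In real applications the fitted order per cluster (HO vs. LO) and the choice of GMM versus k-means are exactly the design decisions the simulation study compares.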
Erroneous generalization: Exploring random error variance in reliability generalizations of psychological measurements.
Psychological Methods · Pub Date: 2025-02-27 · DOI: 10.1037/met0000740
Lukas J Beinhauer, Jens H Fünderich, Frank Renkewitz

Abstract: Reliability generalization (RG) studies frequently interpret meta-analytic heterogeneity in score reliability as evidence of differences in an instrument's measurement quality across administrations. However, such interpretations ignore the fact that, under classical test theory, score reliability depends on two parameters: true score variance and error score variance. True score variance refers to the actual variation in the trait we aim to measure, while error score variance refers to nonsystematic variation arising in the observed, manifest variable. If the error score variance remains constant, variation in true score variance alone can produce heterogeneity in reliability coefficients. While this argument is not new, we argue that current approaches to addressing it in the RG literature are insufficient. Instead, we propose enriching an RG study with Boot-Err: explicitly modeling the error score variance using bootstrapping and meta-analytic techniques. Through a comprehensive simulation scheme, we demonstrate that score reliability can vary while measurement quality remains unaffected. The simulation also illustrates how explicitly modeling error score variances may improve inferences concerning random measurement error, and under which conditions such enhancements occur. Furthermore, using openly available direct replication data, we show how explicitly modeling error score variance allows for an assessment of the extent to which measurement quality can be described as identical across administration sites. (PsycInfo Database Record (c) 2025 APA, all rights reserved.)

Citations: 0
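The core move (bootstrapping the error score variance rather than only the reliability coefficient) can be sketched with the classical-test-theory identity Var_E = Var_total × (1 − reliability), with coefficient alpha standing in for reliability. The function names and settings are illustrative, not the Boot-Err implementation.

```python
import numpy as np

def cronbach_alpha(data):
    """Coefficient alpha for an n_persons x n_items score matrix."""
    k = data.shape[1]
    item_vars = data.var(axis=0, ddof=1)
    total_var = data.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def bootstrap_error_variance(data, n_boot=500, seed=0):
    """Bootstrap the error score variance of the sum score,
    Var_E = Var_total * (1 - alpha), returning its mean and SE.
    Modeling Var_E directly separates measurement quality from
    heterogeneity that is driven purely by true score variance."""
    rng = np.random.default_rng(seed)
    n = data.shape[0]
    estimates = []
    for _ in range(n_boot):
        d = data[rng.integers(0, n, n)]  # resample persons with replacement
        estimates.append(d.sum(axis=1).var(ddof=1) * (1 - cronbach_alpha(d)))
    return float(np.mean(estimates)), float(np.std(estimates, ddof=1))
```

Two sites can then be compared on their bootstrapped Var_E distributions: similar Var_E with different alphas points at differing true score variance, not differing measurement quality.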
Improving the probability of reaching correct conclusions about congruence hypotheses: Integrating statistical equivalence testing into response surface analysis.
Psychological Methods · Pub Date: 2025-02-24 · DOI: 10.1037/met0000743
Sarah Humberg, Felix D Schönbrodt, Steffen Nestler

Abstract: Many psychological theories imply that the degree of congruence between two variables (e.g., self-rated and objectively measured intelligence) is related to some psychological outcome (e.g., life satisfaction). Such congruence hypotheses can be tested with response surface analysis (RSA), in which a second-order polynomial regression model is estimated and suitably interpreted. Although several strategies exist for this interpretation, they all contain rationales that diminish the probability of drawing correct conclusions. For example, a frequently applied strategy involves calculating six auxiliary parameters from the estimated regression weights and accepting the congruence hypothesis if they satisfy certain conditions. In testing these conditions, a nonsignificant null-hypothesis test of some parameters is taken as evidence that the parameter is zero. This interpretation is formally inadmissible and adversely affects the probability of making correct decisions about the congruence hypothesis. We address this limitation of the six-parameter strategy and other RSA strategies by proposing that statistical equivalence testing (SET) be integrated into RSA. We compare the existing and new RSA strategies in a simulation study and find that the SET strategies are sensible alternatives to the existing strategies. We provide code templates for implementing the SET strategies. (PsycInfo Database Record (c) 2025 APA, all rights reserved.)

Citations: 0
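The key fix the abstract describes is replacing "nonsignificant, therefore zero" with a proper equivalence test. A minimal version is the two one-sided tests (TOST) procedure applied to a single parameter summary; the normal approximation and the equivalence bound `delta` below are illustrative assumptions.

```python
from math import erf, sqrt

def tost_equivalence(estimate, se, delta, alpha=0.05):
    """Two one-sided tests (TOST): conclude the parameter is practically
    equivalent to zero only if both one-sided tests reject at level alpha.
    delta is the smallest effect size of interest; normal approximation.
    Returns (p_value, is_equivalent)."""
    def norm_cdf(z):
        return 0.5 * (1 + erf(z / sqrt(2)))
    z_lower = (estimate + delta) / se   # test of H0: parameter <= -delta
    z_upper = (estimate - delta) / se   # test of H0: parameter >= +delta
    p_lower = 1 - norm_cdf(z_lower)
    p_upper = norm_cdf(z_upper)
    p = max(p_lower, p_upper)
    return p, p < alpha
```

Note the asymmetry this repairs: a nonsignificant null-hypothesis test says only that zero cannot be ruled out, whereas TOST can positively reject effects larger than `delta` in magnitude.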
Evaluating statistical fit of confirmatory bifactor models: Updated recommendations and a review of current practice.
Psychological Methods · Pub Date: 2025-02-20 · DOI: 10.1037/met0000730
Sijia Li, Victoria Savalei

Abstract: Confirmatory bifactor models have become very popular in psychological applications, but they are increasingly criticized for statistical pitfalls such as a tendency to overfit, a tendency to produce anomalous results, unstable solutions, and underidentification problems. In part to combat this state of affairs, many different reliability and dimensionality measures have been proposed to help researchers evaluate the quality of an obtained bifactor solution. In empirical practice, however, the evaluation of bifactor models is largely based on structural equation model fit indices. Other critical indicators of solution quality, such as the patterns of general and group factor loadings, whether all estimates are interpretable, and the values of reliability coefficients, are often not taken into account. In addition, some confusion exists in the methodological literature about the appropriate interpretation and application of certain bifactor reliability coefficients. In this article, we accomplish several goals. First, we review reliability coefficients for bifactor models and their correct interpretations, and we provide expectations for their values. Second, to help steer researchers away from sole reliance on structural equation model fit indices and to improve current practice, we provide a checklist for evaluating the statistical fit of bifactor models. Third, we evaluate the state of current practice by examining 96 empirical articles employing confirmatory bifactor models across different areas of psychology. (PsycInfo Database Record (c) 2025 APA, all rights reserved.)

Citations: 0
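Two of the reliability coefficients such reviews typically cover, omega-hierarchical and omega-total, have simple closed forms for an orthogonal bifactor solution: squared sums of loadings over total variance. The sketch below uses those standard formulas; the loading values in the usage example are made up for illustration.

```python
def _variance_parts(general_loadings, group_loadings, error_variances):
    gen = sum(general_loadings) ** 2                 # general-factor variance
    grp = sum(sum(ls) ** 2 for ls in group_loadings)  # group-factor variance
    err = sum(error_variances)                        # unique variance
    return gen, grp, err

def omega_hierarchical(general_loadings, group_loadings, error_variances):
    """Proportion of total score variance attributable to the general
    factor alone (orthogonal bifactor model assumed)."""
    gen, grp, err = _variance_parts(general_loadings, group_loadings,
                                    error_variances)
    return gen / (gen + grp + err)

def omega_total(general_loadings, group_loadings, error_variances):
    """Proportion of total score variance due to all common factors."""
    gen, grp, err = _variance_parts(general_loadings, group_loadings,
                                    error_variances)
    return (gen + grp) / (gen + grp + err)
```

A large gap between omega-total and omega-hierarchical signals that the total score mixes substantial group-factor variance with the general factor, which is exactly the kind of solution-quality information fit indices alone do not convey.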
Is a less wrong model always more useful? Methodological considerations for using ant colony optimization in measure development.
Psychological Methods · Pub Date: 2025-02-20 · DOI: 10.1037/met0000734
Yixiao Dong, Denis Dumas

Abstract: With the advancement of artificial intelligence (AI), many AI-derived techniques have been adapted for psychological and behavioral science research, including measure development, a key task for psychometricians and methodologists. Ant colony optimization (ACO) is an AI-derived metaheuristic algorithm that has been integrated into the structural equation modeling framework to search for optimal (or near-optimal) solutions. ACO-driven measurement modeling is an emerging method for constructing scales, but psychological researchers generally lack the necessary understanding of ACO-optimized models and their outcome solutions. This article investigates whether ACO solutions are indeed optimal and whether the optimized measurement models of ACO are always more psychologically useful than conventional ones built by human psychometricians. To work toward these goals, we highlight five essential methodological considerations for using ACO in measure development: (a) pursuing a local or global optimum, (b) avoiding a subjective optimum, (c) optimizing content validity, (d) bridging the gap between theory and model, and (e) recognizing the limitations of unidirectionality. A joint data set containing item-level data from German (n = 297) and U.S. (n = 334) samples was employed, and seven illustrative ACO analyses with various configurations were conducted to illustrate or facilitate discussion of these considerations. We conclude that measurement solutions from the current ACO have not yet become optimal or close to optimal, and that the optimized measurement models of ACO may be becoming more useful. (PsycInfo Database Record (c) 2025 APA, all rights reserved.)

Citations: 0
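A toy version of ACO-driven item selection helps make the "optimal versus near-optimal" distinction concrete: ants sample item subsets with probability proportional to pheromone, the iteration's best subset reinforces its items, and pheromone evaporates. The objective here (mean inter-item correlation, assumed positive) is a stand-in for the model-fit criteria used in real applications; nothing below reproduces the authors' configurations.

```python
import random

def aco_select_items(corr, n_select, n_ants=20, n_iter=30, rho=0.1, seed=0):
    """Minimal ACO sketch for short-form item selection. corr is a
    symmetric correlation matrix (list of lists); returns the best
    subset found and its mean inter-item correlation."""
    random.seed(seed)
    n_items = len(corr)
    pher = [1.0] * n_items  # one pheromone value per item

    def objective(items):
        pairs = [(i, j) for i in items for j in items if i < j]
        return sum(corr[i][j] for i, j in pairs) / len(pairs)

    best, best_score = None, float("-inf")
    for _ in range(n_iter):
        solutions = []
        for _ in range(n_ants):
            items = set()
            while len(items) < n_select:
                # Roulette-wheel draw weighted by pheromone.
                total = sum(p for k, p in enumerate(pher) if k not in items)
                r = random.uniform(0, total)
                for k in range(n_items):
                    if k in items:
                        continue
                    r -= pher[k]
                    if r <= 0:
                        items.add(k)
                        break
            score = objective(items)
            solutions.append((score, items))
            if score > best_score:
                best, best_score = items, score
        # Evaporate, then reinforce items in the iteration-best solution.
        pher = [p * (1 - rho) for p in pher]
        it_score, it_items = max(solutions, key=lambda s: s[0])
        for k in it_items:
            pher[k] += max(it_score, 0.0)
    return sorted(best), best_score
```

Because reinforcement chases whatever the objective rewards, the sketch also illustrates consideration (c) in the abstract: an item pool's content validity is invisible to the objective unless it is explicitly encoded there.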
Information theory, machine learning, and Bayesian networks in the analysis of dichotomous and Likert responses for questionnaire psychometric validation.
Psychological Methods · Pub Date: 2025-02-17 · DOI: 10.1037/met0000713
Matteo Orsoni, Mariagrazia Benassi, Marco Scutari

Abstract: Questionnaire validation is indispensable in psychology and medicine and is essential for understanding differences across diverse populations in the measured construct. While traditional latent factor models have long dominated psychometric validation, recent advancements have introduced alternative methodologies, such as the "network framework." This study presents a pioneering approach integrating information theory, machine learning (ML), and Bayesian networks (BNs) into questionnaire validation. Our proposed framework treats psychological constructs as complex, causally interacting systems, bridging theories and empirical hypotheses. We emphasize the crucial link between questionnaire items and theoretical frameworks, validated through the known-groups method for effective differentiation of clinical and nonclinical groups. Information theory measures such as the Jensen-Shannon divergence, together with ML for item selection, enhance discriminative power while reducing respondent burden. BNs are employed to uncover conditional dependences between items, illuminating the intricate systems underlying psychological constructs. Through this integrated framework encompassing item selection, theory formulation, and construct validation stages, we empirically validate our method on two simulated data sets (one with dichotomous and the other with Likert-scale data) and a real data set. Our approach demonstrates effectiveness in standard questionnaire research and validation practices, providing insights into the criterion, content, and construct validity of the instrument. (PsycInfo Database Record (c) 2025 APA, all rights reserved.)

Citations: 0
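The Jensen-Shannon divergence used for item selection is straightforward to compute for discrete response distributions. The sketch below scores items by how strongly their response distributions differ between two known groups; using base-2 logs bounds the divergence in [0, 1]. The ranking helper is our own illustrative framing, not the paper's pipeline.

```python
from math import log2

def jensen_shannon_divergence(p, q):
    """JSD between two discrete distributions given as equal-length
    probability lists. Symmetric, bounded in [0, 1] with base-2 logs."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]

    def kl(a, b):
        # Kullback-Leibler divergence; terms with a_i = 0 contribute 0.
        return sum(ai * log2(ai / bi) for ai, bi in zip(a, b) if ai > 0)

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def rank_items_by_jsd(clinical_dists, nonclinical_dists):
    """Rank items (indices) by divergence between the two groups'
    response distributions: higher JSD = more discriminative item."""
    scores = [jensen_shannon_divergence(p, q)
              for p, q in zip(clinical_dists, nonclinical_dists)]
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
```

For a dichotomous item the distributions are just (P(no), P(yes)) per group; for Likert items they are the per-category proportions.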
Robust Bayesian meta-regression: Model-averaged moderation analysis in the presence of publication bias.
Psychological Methods · Pub Date: 2025-02-17 · DOI: 10.1037/met0000737
František Bartoš, Maximilian Maier, T D Stanley, Eric-Jan Wagenmakers

Abstract: Meta-regression is an essential meta-analytic tool for investigating sources of heterogeneity and assessing the impact of moderators. However, existing methods for meta-regression have limitations, such as inadequate consideration of model uncertainty and poor performance under publication bias. To overcome these limitations, we extend robust Bayesian meta-analysis (RoBMA) to meta-regression (RoBMA-regression). RoBMA-regression allows for moderator analyses while simultaneously taking into account uncertainty about the presence and impact of other factors (i.e., the main effect, heterogeneity, publication bias, and other potential moderators). The methodology presents a coherent way of assessing the evidence for and against the presence of both continuous and categorical moderators. We further employ a Savage-Dickey density ratio test to quantify the evidence for and against the presence of the effect at different levels of categorical moderators. We illustrate RoBMA-regression in an empirical example and demonstrate its performance in a simulation study. The methodology is implemented in the RoBMA R package. Overall, RoBMA-regression presents researchers with a powerful and flexible tool for conducting robust and informative meta-regression analyses. (PsycInfo Database Record (c) 2025 APA, all rights reserved.)

Citations: 0
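The Savage-Dickey density ratio mentioned in the abstract can be illustrated with a conjugate normal model: for a nested point null, the Bayes factor in favor of the null is the posterior density at zero divided by the prior density at zero. The Normal(0, prior_sd²) prior and the normal summary-likelihood are illustrative assumptions here, not the RoBMA defaults.

```python
from math import exp, pi, sqrt

def normal_pdf(x, mean, sd):
    return exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * sqrt(2 * pi))

def savage_dickey_bf01(effect_est, se, prior_sd=1.0):
    """Savage-Dickey density ratio for H0: effect = 0, using a conjugate
    Normal(0, prior_sd^2) prior and a normal likelihood summarized by
    (effect_est, se). Returns BF01; values > 1 favor the null."""
    post_var = 1 / (1 / prior_sd**2 + 1 / se**2)   # conjugate update
    post_mean = post_var * (effect_est / se**2)
    post_sd = sqrt(post_var)
    return normal_pdf(0.0, post_mean, post_sd) / normal_pdf(0.0, 0.0, prior_sd)
```

Applied per level of a categorical moderator, the same ratio quantifies how strongly the data at that level support (or contradict) a null effect.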
Meta-analyzing nonpreregistered and preregistered studies.
Psychological Methods · Pub Date: 2025-02-17 · DOI: 10.1037/met0000719
Robbie C M van Aert

Abstract: Preregistration is gaining ground in psychology, and one consequence is that preregistered studies are more often included in meta-analyses. Preregistered studies likely mitigate the effect of publication bias in a meta-analysis, because they can be located in the registries in which they were registered even if they never get published. However, current meta-analysis methods do not take into account that preregistered studies are less susceptible to publication bias: traditional methods treat all studies as equivalent, while meta-analytic conclusions could be improved by taking advantage of preregistered studies. The goal of this article is to introduce the hybrid extended meta-analysis (HYEMA) method, which takes into account whether a study is preregistered and corrects for publication bias in the nonpreregistered studies only. The proposed method is applied to two meta-analyses on prominent effects in the psychological literature: the red-romance hypothesis and money priming. Applying HYEMA to these meta-analyses shows that the average effect size estimate is substantially closer to zero than the estimate of the random-effects meta-analysis model. Two simulation studies tailored to the two applications are also presented to illustrate the method's superior performance, when publication bias is present, compared to the random-effects meta-analysis model and to the precision-effect test and precision-effect estimate with standard error. Hence, I recommend applying HYEMA as a sensitivity analysis whenever a meta-analysis contains a mix of preregistered and nonpreregistered studies. R code as well as a web application (https://rcmvanaert.shinyapps.io/HYEMA) have been developed and are described in the article to facilitate application of the method. (PsycInfo Database Record (c) 2025 APA, all rights reserved.)

Citations: 0
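As a baseline for what HYEMA improves on: a standard random-effects meta-analysis (DerSimonian-Laird) treats all studies identically, whereas HYEMA additionally applies a publication-bias correction to the nonpreregistered subset only. The sketch below implements only this uncorrected baseline, which is the comparison model in the abstract.

```python
def dersimonian_laird(effects, variances):
    """Random-effects meta-analysis with the DerSimonian-Laird tau^2
    estimator. effects: observed effect sizes; variances: their sampling
    variances. Returns (pooled_estimate, standard_error, tau2)."""
    k = len(effects)
    w = [1 / v for v in variances]
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    # Cochran's Q and the method-of-moments tau^2 (truncated at zero).
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)
    # Re-weight by total (sampling + between-study) variance.
    w_re = [1 / (v + tau2) for v in variances]
    est = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = (1 / sum(w_re)) ** 0.5
    return est, se, tau2
```

When publication bias inflates the nonpreregistered effects, this pooled estimate is biased upward, which is precisely the situation in which running HYEMA as a sensitivity analysis is recommended.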