Lois Player, Ryan Hughes, Kaloyan Mitev, Lorraine Whitmarsh, Christina Demski, Nicholas Nash, Trisevgeni Papakonstantinou, Mark Wilson
{"title":"The use of large language models for qualitative research: The Deep Computational Text Analyser (DECOTA).","authors":"Lois Player, Ryan Hughes, Kaloyan Mitev, Lorraine Whitmarsh, Christina Demski, Nicholas Nash, Trisevgeni Papakonstantinou, Mark Wilson","doi":"10.1037/met0000753","DOIUrl":"https://doi.org/10.1037/met0000753","url":null,"abstract":"<p><p>Machine-assisted approaches for free-text analysis are rising in popularity, owing to a growing need to rapidly analyze large volumes of qualitative data. In both research and policy settings, these approaches have promise in providing timely insights into public perceptions and enabling policymakers to understand their community's needs. However, current approaches still require expert human interpretation, posing a financial and practical barrier for those outside of academia. For the first time, we propose and validate the Deep Computational Text Analyser (DECOTA), a novel machine learning methodology that automatically analyzes large free-text data sets and outputs concise themes. Building on structural topic modeling approaches, we used two fine-tuned large language models and sentence transformers to automatically derive \"codes\" and their corresponding \"themes\", as in inductive thematic analysis. To fully automate the process, we designed and validated a novel algorithm to choose the optimal number of \"topics\" for the structural topic modeling. DECOTA outputs key codes and themes, their prevalence, and how prevalence varies across covariates such as age and gender. Each code is accompanied by three representative quotes. Four data sets previously analyzed using thematic analysis were triangulated with DECOTA's codes and themes. We found that DECOTA is approximately 378 times faster and 1,920 times cheaper than human coding and consistently yields codes in agreement with or complementary to human coding (averaging 91.6% for codes and 90% for themes).
The implications for evidence-based policy development, public engagement with policymaking, and psychometric measure development are discussed. (PsycInfo Database Record (c) 2025 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":" ","pages":""},"PeriodicalIF":7.6,"publicationDate":"2025-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143803939","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
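DECOTA itself uses fine-tuned LLMs and sentence transformers, but its core grouping step — clustering response embeddings into candidate "codes" and surfacing the responses nearest each cluster centre as representative quotes — can be sketched with a toy k-means in plain Python. This is illustrative only: the 2-D `emb` vectors below are hypothetical stand-ins for real sentence embeddings, and the function names are my own.

```python
import math
import random

def kmeans(points, k, iters=50, seed=0):
    """Tiny k-means over embedding vectors: returns (centroids, labels)."""
    rng = random.Random(seed)
    centroids = [list(p) for p in rng.sample(points, k)]
    labels = [0] * len(points)
    for _ in range(iters):
        # Assign each point to its nearest centroid, then recompute means.
        labels = [min(range(k), key=lambda j: math.dist(p, centroids[j]))
                  for p in points]
        for j in range(k):
            members = [p for p, lab in zip(points, labels) if lab == j]
            if members:
                centroids[j] = [sum(c) / len(members) for c in zip(*members)]
    return centroids, labels

def representatives(points, centroids, labels, j, n=3):
    """Indices of the n responses closest to cluster j's centroid
    (the analogue of DECOTA's three representative quotes per code)."""
    members = [i for i, lab in enumerate(labels) if lab == j]
    return sorted(members, key=lambda i: math.dist(points[i], centroids[j]))[:n]

# Hypothetical 2-D "embeddings" of six survey responses, two obvious topics.
emb = [[0.1, 0.2], [0.2, 0.1], [0.15, 0.15], [5.0, 5.1], [5.2, 4.9], [5.1, 5.0]]
cents, labs = kmeans(emb, k=2, seed=1)
```

On well-separated data like this, any initialization converges to the two intended groups; a real pipeline would cluster high-dimensional sentence-transformer vectors and label each cluster with an LLM.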
Sijing S J Shao, Ziqian Xu, Qimin Liu, Kenneth McClure, Ross Jacobucci, Scott E Maxwell, Zhiyong Zhang
{"title":"Zero inflation in intensive longitudinal data: Why is it important and how should we deal with it?","authors":"Sijing S J Shao, Ziqian Xu, Qimin Liu, Kenneth McClure, Ross Jacobucci, Scott E Maxwell, Zhiyong Zhang","doi":"10.1037/met0000754","DOIUrl":"https://doi.org/10.1037/met0000754","url":null,"abstract":"<p><p>This study addresses the challenge of analyzing intensive longitudinal data (ILD) with zero-inflated autoregressive processes. ILD, characterized by intensive longitudinal measurements, often exhibit excessive zeros and temporal dependencies. Neglecting zero inflation or mishandling it can lead to biased parameter estimates and inaccurate conclusions. To overcome this issue, we propose a novel zero-inflated process change multilevel autoregressive (ZIP-CAR) model that incorporates zero inflation using a Bayesian framework. We compare the performance of the proposed method with existing methods through a simulation study and demonstrate its advantages in accurately estimating parameters and improving statistical power. Additionally, we apply the ZIP-CAR model to a real intensive longitudinal data set on problematic drinking behaviors, highlighting its effectiveness in capturing autoregressive and cross-lag effects while accounting for zero inflation. The results underscore the importance of addressing zero inflation in ILD analysis and provide practical recommendations for researchers. Our proposed model offers a valuable tool for analyzing ILD with zero-inflated autoregressive processes, facilitating a more comprehensive understanding of dynamic behavioral changes over time. 
(PsycInfo Database Record (c) 2025 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":" ","pages":""},"PeriodicalIF":7.6,"publicationDate":"2025-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143804006","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
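The data structure at issue can be made concrete with a small simulation. The sketch below is plain Python and illustrative only — the authors' ZIP-CAR model is a Bayesian multilevel model, not this generator. Each occasion is a structural zero with probability `p_zero`; otherwise the autoregressive process is observed:

```python
import random

def simulate_zip_ar1(n, p_zero, phi, intercept, sd, seed=0):
    """Zero-inflated AR(1): occasions are structural zeros with
    probability p_zero; otherwise the latent autoregressive process
    (e.g., daily drinking intensity) is observed, floored at zero."""
    rng = random.Random(seed)
    latent = intercept / (1 - phi)  # start at the stationary mean
    series = []
    for _ in range(n):
        latent = intercept + phi * latent + rng.gauss(0, sd)
        observed = 0.0 if rng.random() < p_zero else max(latent, 0.0)
        series.append(observed)
    return series

y = simulate_zip_ar1(n=5000, p_zero=0.4, phi=0.5, intercept=1.0, sd=0.5, seed=7)
```

Ignoring the zero-inflation component when fitting such a series is exactly what the abstract warns biases the autoregressive parameter estimates.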
Psychological methods · Pub Date: 2025-04-01 · Epub Date: 2022-12-22 · DOI: 10.1037/met0000536
Daniel McNeish, David P MacKinnon
{"title":"Intensive longitudinal mediation in Mplus.","authors":"Daniel McNeish, David P MacKinnon","doi":"10.1037/met0000536","DOIUrl":"10.1037/met0000536","url":null,"abstract":"<p><p>Much of the existing longitudinal mediation literature focuses on panel data where relatively few repeated measures are collected over a relatively broad timespan. However, technological advances in data collection (e.g., smartphones, wearables) have led to a proliferation of short duration, densely collected longitudinal data in behavioral research. These intensive longitudinal data differ in structure and focus relative to traditionally collected panel data. As a result, existing methodological resources do not necessarily extend to nuances present in the recent influx of intensive longitudinal data and designs. In this tutorial, we first cover potential limitations of traditional longitudinal mediation models to accommodate unique characteristics of intensive longitudinal data. Then, we discuss how recently developed dynamic structural equation models (DSEMs) may be well-suited for mediation modeling with intensive longitudinal data and can overcome some of the limitations associated with traditional approaches. We describe four increasingly complex intensive longitudinal mediation models: (a) stationary models where the indirect effect is constant over time and people, (b) person-specific models where the indirect effect varies across people, (c) dynamic models where the indirect effect varies across time, and (d) cross-classified models where the indirect effect varies across both time and people. We apply each model to a running example featuring a mobile health intervention designed to improve health behavior of individuals with binge eating disorder. In each example, we provide annotated Mplus code and interpretation of the output to guide empirical researchers through mediation modeling with this increasingly popular type of longitudinal data. 
(PsycInfo Database Record (c) 2025 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":" ","pages":"393-415"},"PeriodicalIF":7.6,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10419989","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
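The product-of-paths logic behind all four models can be illustrated for a single person with a deliberately naive two-stage sketch (real DSEM estimation in Mplus is Bayesian and multilevel, with latent person means; nothing here is the authors' method):

```python
def slope(x, y):
    """OLS slope of y on x (simple regression)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

def person_indirect(x, m, y):
    """Naive product-of-paths estimate for one person: a = slope of
    M on X, b = slope of Y on M; the indirect effect is a * b."""
    return slope(x, m) * slope(m, y)

# Noise-free toy diary data for one person: M = 2X, Y = 3M, so a*b = 6.
x = list(range(10))
m = [2 * v for v in x]
yv = [3 * v for v in m]
```

In the person-specific and cross-classified models of the tutorial, `a` and `b` become random effects, so the indirect effect varies over people and/or time rather than being a single constant as here.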
Psychological methods · Pub Date: 2025-04-01 · Epub Date: 2023-03-23 · DOI: 10.1037/met0000553
Theiss Bendixen, Benjamin Grant Purzycki
{"title":"Cognitive and cultural models in psychological science: A tutorial on modeling free-list data as a dependent variable in Bayesian regression.","authors":"Theiss Bendixen, Benjamin Grant Purzycki","doi":"10.1037/met0000553","DOIUrl":"10.1037/met0000553","url":null,"abstract":"<p><p>Assessing relationships between culture and cognition is central to psychological science. To this end, free-listing is a useful methodological instrument. To facilitate its wider use, we here present the free-list method along with some of its many applications and offer a tutorial on how to prepare and statistically model free-list data as a dependent variable in Bayesian regression using openly available data and code. We further demonstrate the real-world utility of the outlined workflow by modeling within-subject agreement between a free-list task and a corollary item response scale on religious beliefs with a cross-culturally diverse sample. Overall, we fail to find a reliable statistical association between these two instruments, an original empirical finding that calls for further inquiry into identifying the cognitive processes that item response scales and free-list tasks tap into. Throughout, we argue that free-listing is an unambiguous measure of cognitive and cultural information and that the free-list method therefore has broad potential across the social sciences for measuring and modeling individual-level and cross-cultural variation in mental representations.
(PsycInfo Database Record (c) 2025 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":" ","pages":"223-239"},"PeriodicalIF":7.6,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9367003","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
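Before free-list responses enter a regression, they are typically summarized per item. A standard free-list statistic is Smith's salience index, sketched below in plain Python (the example lists are hypothetical, and this is not necessarily the exact operationalization used in the tutorial):

```python
def smiths_s(lists, item):
    """Smith's salience index for one item across free lists: within
    each list the item scores (L - r + 1) / L, where L is the list
    length and r the item's 1-based rank; lists omitting the item
    score 0. S is the mean of these scores over all lists."""
    scores = []
    for lst in lists:
        L = len(lst)
        scores.append((L - lst.index(item)) / L if item in lst else 0.0)
    return sum(scores) / len(scores)

# Hypothetical free lists from three respondents asked about religion.
lists = [["god", "ritual", "sin"], ["ritual", "god"], ["sin"]]
```

Items listed earlier and by more respondents score closer to 1; such scores (or the underlying presence/rank data) can then serve as the dependent variable in a Bayesian regression.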
Psychological methods · Pub Date: 2025-04-01 · Epub Date: 2023-05-11 · DOI: 10.1037/met0000571
Hongyun Liu, Ke-Hai Yuan, Hui Li
{"title":"A systematic framework for defining R-squared measures in mediation analysis.","authors":"Hongyun Liu, Ke-Hai Yuan, Hui Li","doi":"10.1037/met0000571","DOIUrl":"10.1037/met0000571","url":null,"abstract":"<p><p><i>R</i>-squared measures of explained variance are easy to understand, naturally interpretable, and widely used by substantive researchers. In mediation analysis, however, despite recent advances in measures of mediation effect, few effect sizes have good statistical properties. Also, most of these measures are only available for the simplest three-variable mediation model, especially for <i>R</i>²-type measures. By decomposing the mediator into two parts (i.e., the part related to the predictor and the part unrelated to the predictor), this article proposes a systematic framework to develop new effect-size measures of explained variance in mediation analysis. The framework can be easily extended to more complex mediation models and provides more delicate <i>R</i>² measures for empirical researchers. A Monte Carlo simulation study is conducted to examine the statistical properties of the proposed <i>R</i>² effect-size measure. Results show that the new <i>R</i>² measure performs well in approximating the true value of the explained variance of the mediation effect. The use of the proposed measure is illustrated with empirical examples together with program code for its implementation.
(PsycInfo Database Record (c) 2025 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":" ","pages":"306-321"},"PeriodicalIF":7.6,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9796970","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
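The decomposition that drives the framework — splitting the mediator into the part explained by the predictor and the orthogonal remainder — is a simple regression projection. A minimal sketch (the paper's actual R² measures are built on top of this split; the code here only illustrates the split itself):

```python
def decompose_mediator(x, m):
    """Regress M on X; return (fitted, residual): the part of the
    mediator related to the predictor and the part unrelated to it."""
    n = len(x)
    mx, mm = sum(x) / n, sum(m) / n
    b = (sum((xi - mx) * (mi - mm) for xi, mi in zip(x, m))
         / sum((xi - mx) ** 2 for xi in x))
    fitted = [mm + b * (xi - mx) for xi in x]
    resid = [mi - fi for mi, fi in zip(m, fitted)]
    return fitted, resid

# Toy data: the two parts sum back to M and are orthogonal to X.
xs = [0, 1, 2, 3, 4]
ms = [1, 1, 4, 7, 7]
fit, res = decompose_mediator(xs, ms)
```

Because the residual part carries no predictor variance, variance in the outcome explained through the fitted part can be attributed to the mediated pathway — the quantity the proposed R² measures formalize.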
Psychological methods · Pub Date: 2025-04-01 · Epub Date: 2023-01-02 · DOI: 10.1037/met0000547
Julia F Strand
{"title":"Error tight: Exercises for lab groups to prevent research mistakes.","authors":"Julia F Strand","doi":"10.1037/met0000547","DOIUrl":"10.1037/met0000547","url":null,"abstract":"<p><p>Scientists, being human, make mistakes. We transcribe things incorrectly, we make errors in our code, and we intend to do things and then forget. The consequences of errors in research may be as minor as wasted time and annoyance, but may be as severe as losing months of work or having to retract an article. The purpose of this tutorial is to help lab groups identify places in their research workflow where errors may occur and identify ways to avoid them. To that end, this article applies concepts from human factors research on how to create lab cultures and workflows that are intended to minimize errors. This article does not provide a one-size-fits-all set of guidelines for specific practices to use (e.g., one platform on which to back up data); instead, it gives examples of ways that mistakes can occur in research along with recommendations for systems that avoid and detect them. This tutorial is intended to be used as a discussion prompt prior to a lab meeting to help researchers reflect on their own processes and implement safeguards to avoid future errors. (PsycInfo Database Record (c) 2025 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":" ","pages":"416-424"},"PeriodicalIF":7.6,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10694848/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10468942","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Psychological methods · Pub Date: 2025-04-01 · Epub Date: 2023-05-11 · DOI: 10.1037/met0000570
Bhargab Chattopadhyay, Tathagata Bandyopadhyay, Ken Kelley, Jishnu J Padalunkal
{"title":"A sequential approach for noninferiority or equivalence of a linear contrast under cost constraints.","authors":"Bhargab Chattopadhyay, Tathagata Bandyopadhyay, Ken Kelley, Jishnu J Padalunkal","doi":"10.1037/met0000570","DOIUrl":"10.1037/met0000570","url":null,"abstract":"<p><p>Planning an appropriate sample size for a study involves considering several issues. Two important considerations are cost constraints and variability inherent in the population from which data will be sampled. Methodologists have developed sample size planning methods for two or more populations when testing for equivalence or noninferiority/superiority for a linear contrast of population means. Additionally, cost constraints and variance heterogeneity among populations have also been considered. We extend these methods by developing a theory for sequential procedures for testing the equivalence or noninferiority/superiority for a linear contrast of population means under cost constraints, which we prove to effectively utilize the allocated resources. Our method, due to the sequential framework, does not require prespecified values of unknown population variance(s), something that is historically an impediment to designing studies. Importantly, our method does not require an assumption of a specific type of distribution of the data in the relevant population from which the observations are sampled, as we make our developments in a data distribution-free context. We provide an illustrative example to show how the implementation of the proposed approach can be useful in applied research. 
(PsycInfo Database Record (c) 2025 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":" ","pages":"425-439"},"PeriodicalIF":7.6,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9796968","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
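The flavor of a sequential procedure — no prespecified variance, with sampling continuing until the data themselves show that the precision target is met — can be conveyed with a generic z-based stopping rule. This sketch is not the authors' distribution-free procedure and ignores their cost-optimal allocation; it only shows why sequential designs dispense with a priori variance values:

```python
import math
import random

def sequential_n(observe, margin, z=1.645, n0=10, n_max=10000):
    """Keep sampling until the z-based confidence half-width of the
    running mean (standing in for a linear contrast estimate) is
    within the margin; returns (n, mean, half_width)."""
    data = [observe() for _ in range(n0)]
    while True:
        n = len(data)
        mean = sum(data) / n
        var = sum((d - mean) ** 2 for d in data) / (n - 1)
        half = z * math.sqrt(var / n)
        if half <= margin or n >= n_max:
            return n, mean, half
        data.append(observe())  # variance unknown, so sample one more

# Hypothetical data stream: contrast observations ~ N(0.5, 1).
rng = random.Random(3)
n, est, half = sequential_n(lambda: rng.gauss(0.5, 1.0), margin=0.2)
```

The final sample size adapts to the observed variance rather than to a guessed one, which is the property the abstract emphasizes.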
Psychological methods · Pub Date: 2025-04-01 · Epub Date: 2023-01-02 · DOI: 10.1037/met0000545
Andrea Spoto, Massimo Nucci, Elena Prunetti, Michele Vicovaro
{"title":"Improving content validity evaluation of assessment instruments through formal content validity analysis.","authors":"Andrea Spoto, Massimo Nucci, Elena Prunetti, Michele Vicovaro","doi":"10.1037/met0000545","DOIUrl":"10.1037/met0000545","url":null,"abstract":"<p><p>Content validity is defined as the degree to which elements of an assessment instrument are relevant to and representative of the target construct. The available methods for content validity evaluation typically focus on the extent to which a set of items are relevant to the target construct, but do not afford precise evaluation of items' behavior, nor their exhaustiveness with respect to the elements of the target construct. Formal content validity analysis (FCVA) is a new procedure combining methods and techniques from various areas of psychological assessment, such as (a) constructing Boolean classification matrices to formalize relationships among an assessment instrument's items and target construct elements, and (b) computing interrater agreement indices. We discuss how FCVA can be extended through the implementation of a Bayesian procedure to improve the interrater agreement indices' accuracy (Bayesian formal content validity analysis [B-FCVA]). With respect to extant methods, FCVA and B-FCVA can provide a great amount of information about content validity while not demanding much more work for authors and experts. 
(PsycInfo Database Record (c) 2025 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":" ","pages":"203-222"},"PeriodicalIF":7.6,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10468941","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
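The second ingredient of FCVA, interrater agreement over Boolean classification matrices, can be illustrated with Cohen's kappa on two raters' flattened item-by-element judgments. FCVA's actual indices and the Bayesian extension in B-FCVA are more elaborate; this is only the generic chance-corrected core, with made-up ratings:

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two raters' binary (0/1) judgments, e.g.,
    flattened item-by-construct-element classification matrices."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n      # observed agreement
    pa1, pb1 = sum(a) / n, sum(b) / n               # each rater's rate of 1s
    pe = pa1 * pb1 + (1 - pa1) * (1 - pb1)          # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical flattened Boolean matrices from two expert raters.
rater_a = [1, 1, 0, 0, 1, 0]
rater_b = [1, 1, 0, 0, 0, 0]
```

A cell marked 1 means the rater judged that item relevant to that construct element; kappa then corrects raw agreement for agreement expected by chance.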
Psychological methods · Pub Date: 2025-04-01 · Epub Date: 2023-01-09 · DOI: 10.1037/met0000548
Kazuki Hori, Yasuo Miyazaki
{"title":"Cross-level covariance approach to the disaggregation of between-person effect and within-person effect.","authors":"Kazuki Hori, Yasuo Miyazaki","doi":"10.1037/met0000548","DOIUrl":"10.1037/met0000548","url":null,"abstract":"<p><p>In longitudinal studies, researchers are often interested in investigating relations between variables over time. A well-known issue in such a situation is that naively regressing an outcome on a predictor results in a coefficient that is a weighted average of the between-person and within-person effect, which is difficult to interpret. This article focuses on the cross-level covariance approach to disaggregating the two effects. Unlike the traditional centering/detrending approach, the cross-level covariance approach estimates the within-person effect by correlating the within-level observed variables with the between-level latent factors, thereby partialing out the between-person association from the within-level predictor. Retaining this key device, we develop novel latent growth curve models, which can estimate the between-person effects of the predictor's change rate. The proposed models are compared with an existing cross-level covariance model and a centering/detrending model through a real data analysis and a small simulation. The real data analysis shows that the interpretation of the effect parameters and other between-level parameters depends on how a model deals with the time-varying predictors. The simulation reveals that our proposed models can unbiasedly estimate the between- and within-person effects but tend to be more unstable than the existing models.
(PsycInfo Database Record (c) 2025 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":" ","pages":"340-373"},"PeriodicalIF":7.6,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10495378","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
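For contrast, the traditional centering approach the authors compare against is easy to state: a time-varying predictor is split into a person mean (the between-person part) and person-mean-centered deviations (the within-person part). A minimal sketch:

```python
def disaggregate(values_by_person):
    """Split each observation of a time-varying predictor into a
    between-person part (the person mean) and a within-person part
    (the person-mean-centered deviation)."""
    between, within = [], []
    for series in values_by_person:
        pm = sum(series) / len(series)
        between.append(pm)
        within.append([v - pm for v in series])
    return between, within

# Two hypothetical persons, three occasions each.
between, within = disaggregate([[1, 2, 3], [10, 20, 30]])
```

The cross-level covariance approach replaces the observed person mean here with a between-level latent factor, which is what partials the between-person association out of the within-level predictor.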
Psychological methods · Pub Date: 2025-04-01 · Epub Date: 2023-05-11 · DOI: 10.1037/met0000574
Wen Wei Loh, Dongning Ren
{"title":"Estimating time-varying treatment effects in longitudinal studies.","authors":"Wen Wei Loh, Dongning Ren","doi":"10.1037/met0000574","DOIUrl":"10.1037/met0000574","url":null,"abstract":"<p><p>Longitudinal study designs are frequently used to investigate the effects of a naturally observed predictor (treatment) on an outcome over time. Because the treatment at each time point or wave is not randomly assigned, valid inferences of its causal effects require adjusting for covariates that confound each treatment-outcome association. But adjusting for covariates which are inevitably time-varying is fraught with difficulties. On the one hand, standard regression adjustment for variables affected by treatment can lead to severe bias. On the other hand, omitting time-varying covariates from confounding adjustment precipitates spurious associations that can lead to severe bias. Thus, either including or omitting time-varying covariates for confounding adjustment can lead to incorrect inferences. In this article, we introduce an estimation strategy from the causal inference literature for evaluating the causal effects of time-varying treatments in the presence of time-varying confounding. G-estimation of the treatment effect at a particular wave proceeds by carefully adjusting for only pre-treatment instances of all variables while dispensing with any post-treatment instances. The introduced approach has various appealing features. Effect modification by time-varying covariates can be investigated using covariate-treatment interactions. Treatment may be either continuous or noncontinuous with any mean model permitted. Unbiased estimation requires correctly specifying a mean model for either the treatment or the outcome, but not necessarily both. The treatment and outcome models can be fitted with standard regression functions. In summary, g-estimation is effective, flexible, robust, and relatively straightforward to implement. 
(PsycInfo Database Record (c) 2025 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":" ","pages":"240-253"},"PeriodicalIF":7.6,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9796967","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
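In the linear, single-covariate case, the "adjust only for pre-treatment variables" logic can be conveyed with Frisch-Waugh-Lovell-style residualization: partial the pre-treatment covariate out of both the wave-t treatment and the outcome, then take the slope between the residuals. G-estimation proper is more general (noncontinuous treatments, double robustness), so treat this purely as intuition with toy data:

```python
def slope(x, y):
    """OLS slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

def residualize(v, z):
    """Residuals of v after simple regression on pretreatment covariate z."""
    b = slope(z, v)
    mz, mv = sum(z) / len(z), sum(v) / len(v)
    return [vi - (mv + b * (zi - mz)) for vi, zi in zip(v, z)]

def adjusted_effect(treat, outcome, pre_cov):
    """Slope between covariate-residualized treatment and outcome."""
    return slope(residualize(treat, pre_cov), residualize(outcome, pre_cov))

# Toy wave: outcome = 2*treatment + 3*pre-treatment covariate (no noise),
# and treatment is itself correlated with the covariate (confounding).
z = [0, 1, 2, 3]
a = [1, 1, 3, 3]
yw = [2 * ai + 3 * zi for ai, zi in zip(a, z)]
```

The unadjusted slope of the outcome on treatment here is 5.0, not the true 2.0 — the confounding the abstract describes — while residualizing on the pre-treatment covariate recovers 2.0.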