{"title":"Interactions between latent variables in count regression models.","authors":"Christoph Kiefer, Sarah Wilker, Axel Mayer","doi":"10.3758/s13428-024-02483-4","DOIUrl":"10.3758/s13428-024-02483-4","url":null,"abstract":"<p><p>In psychology and the social sciences, researchers often model count outcome variables accounting for latent predictors and their interactions. Even though neglecting measurement error in such count regression models (e.g., Poisson or negative binomial regression) can have unfavorable consequences like attenuation bias, such analyses are often carried out in the generalized linear model (GLM) framework using fallible covariates such as sum scores. An alternative is count regression models based on structural equation modeling, which allow to specify latent covariates and thereby account for measurement error. However, the issue of how and when to include interactions between latent covariates or between latent and manifest covariates is rarely discussed for count regression models. In this paper, we present a latent variable count regression model (LV-CRM) allowing for latent covariates as well as interactions among both latent and manifest covariates. We conducted three simulation studies, investigating the estimation accuracy of the LV-CRM and comparing it to GLM-based count regression models. Interestingly, we found that even in scenarios with high reliabilities, the regression coefficients from a GLM-based model can be severely biased. In contrast, even for moderate sample sizes, the LV-CRM provided virtually unbiased regression coefficients. Additionally, statistical inferences yielded mixed results for the GLM-based models (i.e., low coverage rates, but acceptable empirical detection rates), but were generally acceptable using the LV-CRM. We provide an applied example from clinical psychology illustrating how the LV-CRM framework can be used to model count regressions with latent interactions.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":" ","pages":"8932-8954"},"PeriodicalIF":4.6,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11525413/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142071898","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Establishing the reliability of metrics extracted from long-form recordings using LENA and the ACLEW pipeline.","authors":"Alejandrina Cristia, Lucas Gautheron, Zixing Zhang, Björn Schuller, Camila Scaff, Caroline Rowland, Okko Räsänen, Loann Peurey, Marvin Lavechin, William Havard, Caitlin M Fausey, Margaret Cychosz, Elika Bergelson, Heather Anderson, Najla Al Futaisi, Melanie Soderstrom","doi":"10.3758/s13428-024-02493-2","DOIUrl":"10.3758/s13428-024-02493-2","url":null,"abstract":"<p><p>Long-form audio recordings are increasingly used to study individual variation, group differences, and many other topics in theoretical and applied fields of developmental science, particularly for the description of children's language input (typically speech from adults) and children's language output (ranging from babble to sentences). The proprietary LENA software has been available for over a decade, and with it, users have come to rely on derived metrics like adult word count (AWC) and child vocalization counts (CVC), which have also more recently been derived using an open-source alternative, the ACLEW pipeline. Yet, there is relatively little work assessing the reliability of long-form metrics in terms of the stability of individual differences across time. Filling this gap, we analyzed eight spoken-language datasets: four from North American English-learning infants, and one each from British English-, French-, American English-/Spanish-, and Quechua-/Spanish-learning infants. The audio data were analyzed using two types of processing software: LENA and the ACLEW open-source pipeline. When all corpora were included, we found relatively low to moderate reliability (across multiple recordings, intraclass correlation coefficient attributed to the child identity [Child ICC], was < 50% for most metrics). There were few differences between the two pipelines. Exploratory analyses suggested some differences as a function of child age and corpora. These findings suggest that, while reliability is likely sufficient for various group-level analyses, caution is needed when using either LENA or ACLEW tools to study individual variation. We also encourage improvement of extant tools, specifically targeting accurate measurement of individual variation.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":" ","pages":"8588-8607"},"PeriodicalIF":4.6,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142279941","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Estimating power in complex nonlinear structural equation modeling including moderation effects: The powerNLSEM R-package.","authors":"Julien P Irmer, Andreas G Klein, Karin Schermelleh-Engel","doi":"10.3758/s13428-024-02476-3","DOIUrl":"10.3758/s13428-024-02476-3","url":null,"abstract":"<p><p>The model-implied simulation-based power estimation (MSPE) approach is a new general method for power estimation (Irmer et al., 2024). MSPE was developed especially for power estimation of non-linear structural equation models (SEM), but it also can be applied to linear SEM and manifest models using the R package powerNLSEM. After first providing some information about MSPE and the new adaptive algorithm that automatically selects sample sizes for the best prediction of power using simulation, a tutorial on how to conduct the MSPE for quadratic and interaction SEM (QISEM) using the powerNLSEM package is provided. Power estimation is demonstrated for four methods, latent moderated structural equations (LMS), the unconstrained product indicator (UPI), a simple factor score regression (FSR), and a scale regression (SR) approach to QISEM. In two simulation studies, we highlight the performance of the MSPE for all four methods applied to two QISEM with varying complexity and reliability. Further, we justify the settings of the newly developed adaptive search algorithm via performance evaluations using simulation. Overall, the MSPE using the adaptive approach performs well in terms of bias and Type I error rates.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":" ","pages":"8897-8931"},"PeriodicalIF":4.6,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11525415/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142279942","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A template and tutorial for preregistering studies using passive smartphone measures.","authors":"Anna M Langener, Björn S Siepe, Mahmoud Elsherif, Koen Niemeijer, Pia K Andresen, Samir Akre, Laura F Bringmann, Zachary D Cohen, Nathaniel R Choukas, Konstantin Drexl, Luisa Fassi, James Green, Tabea Hoffmann, Raj R Jagesar, Martien J H Kas, Sebastian Kurten, Ramona Schoedel, Gert Stulp, Georgia Turner, Nicholas C Jacobson","doi":"10.3758/s13428-024-02474-5","DOIUrl":"10.3758/s13428-024-02474-5","url":null,"abstract":"<p><p>Passive smartphone measures hold significant potential and are increasingly employed in psychological and biomedical research to capture an individual's behavior. These measures involve the near-continuous and unobtrusive collection of data from smartphones without requiring active input from participants. For example, GPS sensors are used to determine the (social) context of a person, and accelerometers to measure movement. However, utilizing passive smartphone measures presents methodological challenges during data collection and analysis. Researchers must make multiple decisions when working with such measures, which can result in different conclusions. Unfortunately, the transparency of these decision-making processes is often lacking. The implementation of open science practices is only beginning to emerge in digital phenotyping studies and varies widely across studies. Well-intentioned researchers may fail to report on some decisions due to the variety of choices that must be made. To address this issue and enhance reproducibility in digital phenotyping studies, we propose the adoption of preregistration as a way forward. Although there have been some attempts to preregister digital phenotyping studies, a template for registering such studies is currently missing. This could be problematic due to the high level of complexity that requires a well-structured template. Therefore, our objective was to develop a preregistration template that is easy to use and understandable for researchers. Additionally, we explain this template and provide resources to assist researchers in making informed decisions regarding data collection, cleaning, and analysis. Overall, we aim to make researchers' choices explicit, enhance transparency, and elevate the standards for studies utilizing passive smartphone measures.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":" ","pages":"8289-8307"},"PeriodicalIF":4.6,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11525430/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141900815","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On aggregation invariance of multinomial processing tree models.","authors":"Edgar Erdfelder, Julian Quevedo Pütter, Martin Schnuerch","doi":"10.3758/s13428-024-02497-y","DOIUrl":"10.3758/s13428-024-02497-y","url":null,"abstract":"<p><p>Multinomial processing tree (MPT) models are prominent and frequently used tools to model and measure cognitive processes underlying responses in many experimental paradigms. Although MPT models typically refer to cognitive processes within single individuals, they have often been applied to group data aggregated across individuals. We investigate the conditions under which MPT analyses of aggregate data make sense. After introducing the notions of structural and empirical aggregation invariance of MPT models, we show that any MPT model that holds at the level of single individuals must also hold at the aggregate level when it is both structurally and empirically aggregation invariant. Moreover, group-level parameters of aggregation-invariant MPT models are equivalent to the expected values (i.e., means) of the corresponding individual parameters. To investigate the robustness of MPT results for aggregate data when one or both invariance conditions are violated, we additionally performed a series of simulation studies, systematically manipulating (1) the sample sizes in different trees of the model, (2) model parameterization, (3) means and variances of crucial model parameters, and (4) their correlations with other parameters of the respective MPT model. Overall, our results show that MPT parameter estimates based on aggregate data are trustworthy under rather general conditions, provided that a few preconditions are met.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":" ","pages":"8677-8694"},"PeriodicalIF":4.6,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11525265/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142456954","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dissecting the components of error in analogue report tasks.","authors":"Ivan Tomić, Dagmar Adamcová, Máté Fehér, Paul M Bays","doi":"10.3758/s13428-024-02453-w","DOIUrl":"10.3758/s13428-024-02453-w","url":null,"abstract":"<p><p>Over the last two decades, the analogue report task has become a standard method for measuring the fidelity of visual representations across research domains including perception, attention, and memory. Despite its widespread use, there has been no methodical investigation of the different task parameters that might contribute to response variability. To address this gap, we conducted two experiments manipulating components of a typical analogue report test of memory for colour hue. We found that human response errors were independently affected by changes in storage and maintenance requirements of the task, demonstrated by a strong effect of set size even in the absence of a memory delay. In contrast, response variability remained unaffected by physical size of the colour wheel, implying negligible contribution of motor noise to task performance, or by its chroma radius, highlighting non-uniformity of the standard colour space. Comparing analogue report to a matched forced-choice task, we found variation in adjustment criterion made a limited contribution to analogue report variability, becoming meaningful only with low representational noise. Our findings validate the analogue report task as a robust measure of representational fidelity for most purposes, while also quantifying non-representational sources of noise that would limit its reliability in specialized settings.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":" ","pages":"8196-8213"},"PeriodicalIF":4.6,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11525414/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141557939","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A tutorial: Analyzing eye and head movements in virtual reality.","authors":"Walter F Bischof, Nicola C Anderson, Alan Kingstone","doi":"10.3758/s13428-024-02482-5","DOIUrl":"10.3758/s13428-024-02482-5","url":null,"abstract":"<p><p>This tutorial provides instruction on how to use the eye tracking technology built into virtual reality (VR) headsets, emphasizing the analysis of head and eye movement data when an observer is situated in the center of an omnidirectional environment. We begin with a brief description of how VR eye movement research differs from previous forms of eye movement research, as well as identifying some outstanding gaps in the current literature. We then introduce the basic methodology used to collect VR eye movement data both in general and with regard to the specific data that we collected to illustrate different analytical approaches. We continue with an introduction of the foundational ideas regarding data analysis in VR, including frames of reference, how to map eye and head position, and event detection. In the next part, we introduce core head and eye data analyses focusing on determining where the head and eyes are directed. We then expand on what has been presented, introducing several novel spatial, spatio-temporal, and temporal head-eye data analysis techniques. We conclude with a reflection on what has been presented, and how the techniques introduced in this tutorial provide the scaffolding for extensions to more complex and dynamic VR environments.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":" ","pages":"8396-8421"},"PeriodicalIF":4.6,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141905783","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The role of individual differences in emotional word recognition: Insights from a large-scale lexical decision study.","authors":"Juan Haro, José Antonio Hinojosa, Pilar Ferré","doi":"10.3758/s13428-024-02488-z","DOIUrl":"10.3758/s13428-024-02488-z","url":null,"abstract":"<p><p>This work presents a large lexical decision mega-study in Spanish, with 918 participants and 7500 words, focusing on emotional content and individual differences. The main objective was to investigate how emotional valence and arousal influence word recognition, controlling for a large number of confounding variables. In addition, as a unique contribution, the study examined the modulation of these effects by individual differences. Results indicated a significant effect of valence and arousal on lexical decision times, with an interaction between these variables. A linear effect of valence was observed, with slower recognition times for negative words and faster recognition times for positive words. In addition, arousal showed opposite effects in positive and negative words. Importantly, the effect of emotional variables was affected by personality traits (extroversion, conscientiousness and openness to experience), age and gender, challenging the 'one-size-fits-all' interpretation of emotional word processing. All data collected in the study is available to the research community: https://osf.io/cbtqy . This includes data from each participant (RTs, errors and individual differences scores), as well as values of concreteness (n = 1690), familiarity (n = 1693) and age of acquisition (n = 2171) of the words collected exclusively for this study. This is a useful resource for researchers interested not only in emotional word processing, but also in lexical processing in general and the influence of individual differences.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":" ","pages":"8501-8520"},"PeriodicalIF":4.6,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11525433/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142131707","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Validating a brief measure of four facets of social evaluation.","authors":"Alex Koch, Austin Smith, Susan T Fiske, Andrea E Abele, Naomi Ellemers, Vincent Yzerbyt","doi":"10.3758/s13428-024-02489-y","DOIUrl":"10.3758/s13428-024-02489-y","url":null,"abstract":"<p><p>Five studies (N = 7972) validated a brief measure and model of four facets of social evaluation (friendliness and morality as horizontal facets; ability and assertiveness as vertical facets). Perceivers expressed their personal impressions or estimated society's impression of different types of targets (i.e., envisioned or encountered groups or individuals) and numbers of targets (i.e., between six and 100) in the separate, items-within-target mode or the joint, targets-within-item mode. Factor analyses confirmed that a two-items-per-facet measure fit the data well and better than a four-items-per-dimension measure that captured the Big Two model (i.e., no facets, just the horizontal and vertical dimensions). As predicted, the correlation between the two horizontal facets and between the two vertical facets was higher than the correlations between any horizontal facet and any vertical facet. Perceivers' evaluations of targets on each facet were predictors of unique and relevant behavior intentions. Perceiving a target as more friendly, moral, able, and assertive increased the likelihood of relying on the target's loyalty, fairness, intellect, and hubris in an economic game, respectively. These results establish the external, internal, convergent, discriminant, and predictive validity of the brief measure and model of four facets of social evaluation.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":" ","pages":"8521-8539"},"PeriodicalIF":4.6,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142131708","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"PsySuite: An android application designed to perform multimodal psychophysical testing.","authors":"Alberto Inuggi, Nicola Domenici, Alessia Tonelli, Monica Gori","doi":"10.3758/s13428-024-02475-4","DOIUrl":"10.3758/s13428-024-02475-4","url":null,"abstract":"<p><p>In behavioral sciences, there is growing concern about the inflation of false-positive rates due to the amount of under-powered studies that have been shared in the past years. While problematic, having the possibility to recruit (lots of) participants (for a lot of time) is realistically not achievable for many research facilities. Factors that hinder the reaching of optimal sample sizes are, to name but a few, research costs, participants' availability and commitment, and logistics. We challenge these issues by introducing PsySuite, an Android app designed to foster a remote approach to multimodal behavioral testing. To validate PsySuite, we first evaluated its ability to generate stimuli appropriate to rigorous psychophysical testing, measuring both the app's accuracy (i.e., stimuli's onset, offset, and multimodal simultaneity) and precision (i.e., the stability of a given pattern across trials), using two different smartphone models. We then evaluated PsySuite's ability to replicate perceptual performances obtained using a classic psychophysical paradigm, comparing sample data collected with the app against those measured via a PC-based setup. Our results showed that PsySuite could accurately reproduce stimuli with a minimum duration of 7 ms, 17 ms, and 30 ms for the auditory, visual, and tactile modalities, respectively, and that perceptual performances obtained with PsySuite were consistent with the perceptual behavior observed using the classical setup. Combined with the high accessibility inherently supported by PsySuite, here we ought to share the app to further boost psychophysical research, aiming at setting it to a cheap, user-friendly, and portable level.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":" ","pages":"8308-8329"},"PeriodicalIF":4.6,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11525261/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141974970","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}