Many morphs: Parsing gesture signals from the noise.
Alexander Mielke, Gal Badihi, Kirsty E Graham, Charlotte Grund, Chie Hashimoto, Alex K Piel, Alexandra Safryghin, Katie E Slocombe, Fiona Stewart, Claudia Wilke, Klaus Zuberbühler, Catherine Hobaiter
Behavior Research Methods, pp. 6520–6537 (October 2024). doi:10.3758/s13428-024-02368-6
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11362259/pdf/

Abstract: Parsing signals from noise is a general problem for signallers and recipients, and for researchers studying communicative systems. Substantial efforts have been invested in comparing how other species encode information and meaning, and how signalling is structured. However, research depends on identifying and discriminating signals that represent meaningful units of analysis. Early approaches to defining signal repertoires applied top-down approaches, classifying cases into predefined signal types. Recently, more labour-intensive methods have taken a bottom-up approach, describing detailed features of each signal and clustering cases based on previously undetectable patterns of similarity in multi-dimensional feature space. Nevertheless, it remains essential to assess whether the resulting repertoires are composed of relevant units from the perspective of the species using them, and to redefine repertoires when additional data become available. In this paper we provide a framework that takes data from the largest set of wild chimpanzee (Pan troglodytes) gestures currently available, splits gesture types at a fine scale based on modifying features of gesture expression using latent class analysis (a model-based cluster-detection algorithm for categorical variables), and then determines whether this splitting process reduces uncertainty about the goal or community of the gesture. Our method allows different features of interest to be incorporated into the splitting process, providing substantial future flexibility across, for example, species, populations, and levels of signal granularity. In doing so, we provide a powerful tool that allows researchers interested in gestural communication to establish repertoires of relevant units for subsequent analyses within and between systems of communication.

Assessing the distortions introduced when calculating d′: A simulation approach.
Yiyang Chen, Heather R Daly, Mark A Pitt, Trisha Van Zandt
Behavior Research Methods, pp. 7728–7747 (October 2024). doi:10.3758/s13428-024-02447-8

Abstract: The discriminability measure d′ is widely used in psychology to estimate sensitivity independently of response bias. The conventional approach to estimating d′ involves a transformation of the hit rate and the false-alarm rate. When performance is perfect, correction methods must be applied to calculate d′, but these corrections distort the estimate. In three simulation studies, we show that distortion in d′ estimation can also arise from other properties of the experimental design (number of trials, sample size, sample variance, task difficulty) that, combined with the correction method, make the distortion in any specific design complex and, in the worst cases, can mislead statistical inference (Type I and Type II errors). To address this problem, we propose that researchers simulate d′ estimation to explore the impact of design choices, given anticipated or observed data. An R Shiny application is introduced that estimates d′ distortion, giving researchers the means to identify distortion and take steps to minimize its impact.
{"title":"Validating the IDRIS and IDRIA: Two infrequency/frequency scales for detecting careless and insufficient effort survey responders.","authors":"Cameron S Kay","doi":"10.3758/s13428-024-02452-x","DOIUrl":"10.3758/s13428-024-02452-x","url":null,"abstract":"<p><p>To detect careless and insufficient effort (C/IE) survey responders, researchers can use infrequency items - items that almost no one agrees with (e.g., \"When a friend greets me, I generally try to say nothing back\") - and frequency items - items that almost everyone agrees with (e.g., \"I try to listen when someone I care about is telling me something\"). Here, we provide initial validation for two sets of these items: the 14-item Invalid Responding Inventory for Statements (IDRIS) and the 6-item Invalid Responding Inventory for Adjectives (IDRIA). Across six studies (N<sub>1</sub> = 536; N<sub>2</sub> = 701; N<sub>3</sub> = 500; N<sub>4</sub> = 499; N<sub>5</sub> = 629, N<sub>6</sub> = 562), we found consistent evidence that the IDRIS is capable of detecting C/IE responding among statement-based scales (e.g., the HEXACO-PI-R) and the IDRIA is capable of detecting C/IE responding among both adjective-based scales (e.g., the Lex-20) and adjective-derived scales (e.g., the BFI-2). These findings were robust across different analytic approaches (e.g., Pearson correlations; Spearman rank-order correlations), different indices of C/IE responding (e.g., person-total correlations; semantic synonyms; horizontal cursor variability), and different sample types (e.g., US undergraduate students; Nigerian survey panel participants). Taken together, these results provide promising evidence for the utility of the IDRIS and IDRIA in detecting C/IE responding.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":" ","pages":"7790-7813"},"PeriodicalIF":4.6,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141557942","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Effective and adaptable: Four studies on the shortened attitude toward the color blue marker variable scale.","authors":"Brian Miller, Marcia Simmering, Elizabeth Ragland","doi":"10.3758/s13428-024-02465-6","DOIUrl":"10.3758/s13428-024-02465-6","url":null,"abstract":"<p><p>This research is an extension of the recent scale development efforts for the marker variable Attitude Toward the Color Blue (ATCB), which addresses the efficacy of multiple shorter permutations of the scale. The purpose of this study is to develop a shorter version of an ideal marker variable scale used to detect common method variance. Potential uses of the shorter version of ATCB include intensive longitudinal studies, implementation of experience sampling methodology, or any brief survey for which the original version might be cumbersome to implement repeatedly or appear very odd to the respondent when paired with only a few other substantive items. Study 1, uses all six-, five-, and four-item versions of ATCB in confirmatory factor analysis (CFA) marker technique tests on a bivariate relationship. Study 2 analyzes the best- and worst-performing versions of reduced lengths of the ATCB scale found in the first study on another bivariate relationship. Study 3 compares the original seven-item version, as well as randomly selected reduced length versions in a data set with 15 model relationships. Study 4 uses an experiment to determine the efficacy of providing respondents with one of three shorter ATCB scales in a model of three substantive variables. Our findings indicate that ATCB of different permutations and lengths can detect CMV successfully, and that researchers should choose the length of scale based on their survey length. We conclude that ATCB is adaptable for a variety of research situations, presenting it as a valuable tool for high-quality research.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":" ","pages":"7985-8008"},"PeriodicalIF":4.6,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141619151","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Measuring mind wandering with experience sampling during task performance: An item response theory investigation.
Anthony P Zanesco, Nicholas T Van Dam, Ekaterina Denkova, Amishi P Jha
Behavior Research Methods, pp. 7707–7727 (October 2024). doi:10.3758/s13428-024-02446-9
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11362314/pdf/

Abstract: The tendency for individuals to mind-wander is often measured using experience sampling methods, in which probe questions embedded within computerized cognitive tasks attempt to catch episodes of off-task thought at random intervals during task performance. However, mind-wandering probe questions and response options are often chosen ad hoc and vary between studies, with little extant guidance on the psychometric consequences of these methodological decisions. In the present study, we examined the psychometric properties of several common approaches to assessing mind wandering using methods from item response theory (IRT). IRT latent modeling demonstrated that measurement information was generally distributed across the range of trait estimates according to when probes were presented in time: probes presented earlier provided more information about individuals with a greater tendency to mind-wander than probes presented later. Furthermore, mind-wandering ratings made on a continuous scale or with multiple categorical rating options provided more information about individuals' latent mind-wandering tendency, across a broader range of the trait continuum, than ratings dichotomized into on-task and off-task categories. In addition, IRT provided evidence that reports of "task-related thoughts" contribute to the task-focused dimension of the construct continuum, justifying studies that conceptualize these responses as a kind of task-related focus. We hope these findings will guide researchers seeking to maximize the measurement precision of their mind-wandering assessment procedures.
{"title":"Customizing Bayesian multivariate generalizability theory to mixed-format tests.","authors":"Zhehan Jiang, Jinying Ouyang, Dingjing Shi, Dexin Shi, Jihong Zhang, Lingling Xu, Fen Cai","doi":"10.3758/s13428-024-02472-7","DOIUrl":"10.3758/s13428-024-02472-7","url":null,"abstract":"<p><p>Mixed-format tests, which typically include dichotomous items and polytomously scored tasks, are employed to assess a wider range of knowledge and skills. Recent behavioral and educational studies have highlighted their practical importance and methodological developments, particularly within the context of multivariate generalizability theory. However, the diverse response types and complex designs of these tests pose significant analytical challenges when modeling data simultaneously. Current methods often struggle to yield reliable results, either due to the inappropriate treatment of different types of response data separately or the imposition of identical covariates across various response types. Moreover, there are few software packages or programs that offer customized solutions for modeling mixed-format tests, addressing these limitations. This tutorial provides a detailed example of using a Bayesian approach to model data collected from a mixed-format test, comprising multiple-choice questions and free-response tasks. The modeling was conducted using the Stan software within the R programming system, with Stan codes tailored to the structure of the test design, following the principles of multivariate generalizability theory. By further examining the effects of prior distributions in this example, this study demonstrates how the adaptability of Bayesian models to diverse test formats, coupled with their potential for nuanced analysis, can significantly advance the field of psychometric modeling.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":" ","pages":"8080-8090"},"PeriodicalIF":4.6,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141787188","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Comparing child word associations to adult associative norms: Evidence for child-specific associations with a strong priming effect in 3-year-olds.","authors":"Nadine Fitzpatrick, Caroline Floccia","doi":"10.3758/s13428-024-02414-3","DOIUrl":"10.3758/s13428-024-02414-3","url":null,"abstract":"<p><p>Investigating how infants first establish relationships between words is a necessary step towards understanding how an interconnected network of semantic relationships develops in the adult lexical-semantic system. Stimuli selection for these child studies is critical since words must be both familiar and highly imageable. However, there has been a reliance on adult word association norms to inform stimuli selection in English infant studies to date, as no resource currently exists for child-specific word associations. We present three experiments that explore the strength of word-word relationships in 3-year-olds. Experiment 1 collected children's word associations (WA) (N = 150; female = 84, L1 = British English) and compared them to adult associative norms (Moss & Older, 1996; Nelson et al., 2004 (Behavior Research Methods, Instruments, & Computers, 36(3), 402-407)). Experiment 2 replicated WAs from Experiment 1 in an online adaptation of the task (N = 24: 13 female, L1 = British English). Both experiments indicated a high proportion of child-specific WAs not represented in adult norms (Moss & Older, 1996; Nelson et al., 2004 (Behavior Research Methods, Instruments, & Computers, 36(3), 402-407)). Experiment 3 tested noun-noun WAs from these responses in an online semantic priming study (N = 40: 19 female, L1 = British English) and found that association type modulated priming (F(2.57, 100.1) = 13.13, p <. 0001, generalized η<sup>2</sup> = .19). This research presents a resource of child-specific imageable noun-noun word pair stimuli suitable for testing young children in word recognition and semantic priming studies.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":" ","pages":"7168-7218"},"PeriodicalIF":4.6,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11362254/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141305297","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Correction: Development and validation of the Emotional Climate Change Stories (ECCS) stimuli set.
Dominika Zaremba, Jarosław M Michałowski, Christian A Klöckner, Artur Marchewka, Małgorzata Wierzba
Behavior Research Methods, p. 8158 (October 2024). doi:10.3758/s13428-024-02460-x
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11362478/pdf/
{"title":"Model selection of GLMMs in the analysis of count data in single-case studies: A Monte Carlo simulation.","authors":"Haoran Li","doi":"10.3758/s13428-024-02464-7","DOIUrl":"10.3758/s13428-024-02464-7","url":null,"abstract":"<p><p>Generalized linear mixed models (GLMMs) have great potential to deal with count data in single-case experimental designs (SCEDs). However, applied researchers have faced challenges in making various statistical decisions when using such advanced statistical techniques in their own research. This study focused on a critical issue by investigating the selection of an appropriate distribution to handle different types of count data in SCEDs due to overdispersion and/or zero-inflation. To achieve this, I proposed two model selection frameworks, one based on calculating information criteria (AIC and BIC) and another based on utilizing a multistage-model selection procedure. Four data scenarios were simulated including Poisson, negative binominal (NB), zero-inflated Poisson (ZIP), and zero-inflated negative binomial (ZINB). The same set of models (i.e., Poisson, NB, ZIP, and ZINB) were fitted for each scenario. In the simulation, I evaluated 10 model selection strategies within the two frameworks by assessing the model selection bias and its consequences on the accuracy of the treatment effect estimates and inferential statistics. Based on the simulation results and previous work, I provide recommendations regarding which model selection methods should be adopted in different scenarios. The implications, limitations, and future research directions are also discussed.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":" ","pages":"7963-7984"},"PeriodicalIF":4.6,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141578887","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Author Correction: r2mlm: An R package calculating R-squared measures for multilevel models.
Mairead Shaw, Jason D Rights, Sonya S Sterba, Jessica Kay Flake
Behavior Research Methods, p. 8157 (October 2024). doi:10.3758/s13428-024-02431-2