{"title":"The discrepancy in timing between synchronous signals and visual stimulation should not be underestimated.","authors":"Biao Chen, Junjie Bu, Xu Jiang, Ping Wang, Yan Xie, Zhuoyun Wang, Zhen Liang, Shengzhao Zhang","doi":"10.3758/s13428-024-02382-8","DOIUrl":"10.3758/s13428-024-02382-8","url":null,"abstract":"<p><p>Response latency is a critical parameter in studying human behavior, representing the time interval between stimulus onset and the response. However, timing discrepancies between devices can introduce errors. Serial port synchronization signals can mitigate this, but limited information is available regarding their accuracy. Optical signals offer another option, but the difference in position between the optical signal and the visual stimulus can introduce errors, and reports on reducing this error are scarce. This study investigates methods for reducing these timing errors. We used Psychtoolbox to generate visual stimuli and serial port synchronization signals and examined their accuracy. We then proposed a calibration formula to minimize the error between optical signals and visual stimuli. The findings are as follows: First, the serial port synchronization signal precedes the visual stimulus, with a smaller lead time at higher refresh rates. Second, the lead time increases as the stimulus position shifts rightward and downward. Under Linux and with IOPort(), serial port synchronization signals were more accurate. Given the poor accuracy of serial port synchronization signals and the multiple factors that influence them, we recommend using optical signals for time synchronization. The results indicate that, for the darkening process, the mean time error is -0.23 to 0.08 ms. The calibration formula can help measure response latency accurately. This study provides valuable insights for optimizing experimental design and improving the accuracy of response latency measurement.
Although this study involves only visual stimuli, its methods and results can still serve as a reference.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":null,"pages":null},"PeriodicalIF":4.6,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140130648","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
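The position-dependent lead time described in this abstract is consistent with how raster displays draw each frame top-to-bottom. A minimal sketch of that intuition, under an assumed simple raster model (this is not the paper's calibration formula):

```python
# Approximate scan-out delay of a stimulus drawn at a given row of a
# raster display: lower rows are illuminated later within each frame,
# and shorter frames (higher refresh rates) shrink the delay.
# Simplified model for illustration, not the published calibration formula.

def scanout_delay_ms(row, total_rows, refresh_hz):
    """Expected delay (ms) from frame start until `row` is drawn."""
    frame_period_ms = 1000.0 / refresh_hz
    return (row / total_rows) * frame_period_ms

# A stimulus halfway down a 1080-line display is drawn later into the
# frame at 60 Hz than at 120 Hz, matching the smaller lead time the
# study observed at higher refresh rates.
print(scanout_delay_ms(540, 1080, 60))
print(scanout_delay_ms(540, 1080, 120))
```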
{"title":"A virtual reality paradigm with dynamic scene stimuli for use in memory research.","authors":"Noah S Okada, Katherine L McNeely-White, Anne M Cleary, Brooke N Carlaw, Daniel L Drane, Thomas D Parsons, Timothy McMahan, Joseph Neisser, Nigel P Pedersen","doi":"10.3758/s13428-023-02243-w","DOIUrl":"10.3758/s13428-023-02243-w","url":null,"abstract":"<p><p>Episodic memory may essentially be memory for one's place within a temporally unfolding scene from a first-person perspective. Given this, pervasively used static stimuli may only capture one small part of episodic memory. A promising approach for advancing the study of episodic memory is immersing participants within varying scenes from a first-person perspective. We present a pool of distinct scene stimuli for use in virtual environments and a paradigm that is implementable across varying levels of immersion on multiple virtual reality (VR) platforms and adaptable to studying various aspects of scene and episodic memory. In our task, participants are placed within a series of virtual environments from a first-person perspective and guided through a virtual tour of scenes during a study phase and a test phase. In the test phase, some scenes share a spatial layout with studied scenes; others are completely novel. In three experiments with varying degrees of immersion, we measure scene recall, scene familiarity-detection during recall failure, the subjective experience of déjà vu, the ability to predict the next turn on a tour, the subjective sense of being able to predict the next turn on a tour, and the factors that influence memory search and the inclination to generate candidate recollective information. The level of first-person immersion mattered to multiple facets of episodic memory. 
The paradigm presents a useful means of advancing mechanistic understanding of how memory operates in realistic dynamic scene environments, including in combination with cognitive neuroscience methods such as functional magnetic resonance imaging and electrophysiology.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":null,"pages":null},"PeriodicalIF":4.6,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11018716/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41232061","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Many morphs: Parsing gesture signals from the noise.","authors":"Alexander Mielke, Gal Badihi, Kirsty E Graham, Charlotte Grund, Chie Hashimoto, Alex K Piel, Alexandra Safryghin, Katie E Slocombe, Fiona Stewart, Claudia Wilke, Klaus Zuberbühler, Catherine Hobaiter","doi":"10.3758/s13428-024-02368-6","DOIUrl":"10.3758/s13428-024-02368-6","url":null,"abstract":"<p><p>Parsing signals from noise is a general problem for signallers and recipients, and for researchers studying communicative systems. Substantial efforts have been invested in comparing how other species encode information and meaning, and how signalling is structured. However, research depends on identifying and discriminating signals that represent meaningful units of analysis. Early approaches to defining signal repertoires were top-down, classifying cases into predefined signal types. Recently, more labour-intensive methods have taken a bottom-up approach, describing detailed features of each signal and clustering cases based on patterns of similarity in multi-dimensional feature-space that were previously undetectable. Nevertheless, it remains essential to assess whether the resulting repertoires are composed of relevant units from the perspective of the species using them, and to redefine repertoires when additional data become available. In this paper, we provide a framework that takes data from the largest set of wild chimpanzee (Pan troglodytes) gestures currently available, splitting gesture types at a fine scale based on modifying features of gesture expression using latent class analysis (a model-based cluster detection algorithm for categorical variables), and then determining whether this splitting process reduces uncertainty about the goal or community of the gesture.
Our method allows different features of interest to be incorporated into the splitting process, providing substantial future flexibility across, for example, species, populations, and levels of signal granularity. Doing so, we provide a powerful tool allowing researchers interested in gestural communication to establish repertoires of relevant units for subsequent analyses within and between systems of communication.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":null,"pages":null},"PeriodicalIF":4.6,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11362259/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140027307","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
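The criterion in this abstract, whether splitting a gesture type reduces uncertainty about the signaller's goal, can be sketched information-theoretically: a split is informative if it lowers the conditional entropy of the goal given the gesture. A hedged illustration with hypothetical counts (not the authors' data, and a stand-in for their latent class analysis):

```python
# Conditional entropy H(goal | gesture) as a simple proxy for the
# "does splitting reduce uncertainty about the goal?" question.
# Gesture labels and counts below are hypothetical.
import math
from collections import Counter

def conditional_entropy(pairs):
    """H(goal | gesture) in bits, from (gesture, goal) observations."""
    n = len(pairs)
    by_gesture = Counter(g for g, _ in pairs)
    h = 0.0
    for gesture, n_g in by_gesture.items():
        goals = Counter(goal for g, goal in pairs if g == gesture)
        h_g = -sum((c / n_g) * math.log2(c / n_g) for c in goals.values())
        h += (n_g / n) * h_g
    return h

# One lumped gesture type used for two goals...
lumped = [("slap", "travel")] * 10 + [("slap", "play")] * 10
# ...versus the same cases split into two morphs by a modifying feature.
split = [("slap_soft", "travel")] * 10 + [("slap_hard", "play")] * 10

print(conditional_entropy(lumped))  # 1 bit of uncertainty about the goal
print(conditional_entropy(split))   # 0 bits: the morphs disambiguate the goal
```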
{"title":"Scaling a common assessment of associative ability: Development and validation of a multiple-choice compound remote associates task.","authors":"Kendall A Mather, Sara J Weston, David M Condon","doi":"10.3758/s13428-024-02422-3","DOIUrl":"10.3758/s13428-024-02422-3","url":null,"abstract":"<p><p>The assessment of creativity as an individual difference has historically focused on divergent thinking, which is increasingly viewed as involving the associative processes that are also understood to be a key component of creative potential. Research on associative processes has proliferated in many sub-fields, often using Compound Remote Associates (CRA) tasks with an open response format and relatively small participant samples. In the present work, we introduce a new format that is more amenable to large-scale data collection in survey designs, and present evidence for the reliability and validity of CRA measures in general using multiple large samples. Study 1 uses a large, representative dataset (N = 1,323,480) to demonstrate strong unidimensionality and internal consistency (α = .97; ωt = .87), as well as links to individual differences in temperament, cognitive ability, occupation, and job characteristics. Study 2 uses an undergraduate sample (N = 685) to validate the use of a multiple-choice format relative to the traditional approach. Study 3 uses a crowdsourced sample (N = 357) to demonstrate high test-retest reliability of the items (r =.74). 
Finally, Study 4 uses a sample that overlaps with Study 1 (N = 1,502,922) to provide item response theory (IRT) parameters for a large set of high-quality CRA items that use a multiple-choice response mode, thus facilitating their use in future research on creativity, insight, and related topics.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":null,"pages":null},"PeriodicalIF":4.6,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141261464","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
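IRT parameters like those provided in Study 4 are conventionally interpreted through a logistic item response function; the two-parameter logistic (2PL) model is the standard form. A short sketch with illustrative parameter values (not the published item parameters):

```python
# Two-parameter logistic (2PL) IRT model: the probability that a
# respondent with ability `theta` answers an item correctly, given the
# item's discrimination `a` and difficulty `b`.
# Parameter values below are illustrative only.
import math

def p_correct_2pl(theta, a, b):
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

print(p_correct_2pl(theta=0.0, a=1.5, b=0.0))  # 0.5 when theta equals b
print(p_correct_2pl(theta=1.0, a=1.5, b=0.0))  # higher ability, higher p
```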
{"title":"Naming in a multilingual context: Norms for the ICMR-Manipal colour picture corpus in Kannada from the Indian context.","authors":"Rajath Shenoy, Lyndsey Nickels, Gopee Krishnan","doi":"10.3758/s13428-024-02439-8","DOIUrl":"10.3758/s13428-024-02439-8","url":null,"abstract":"<p><p>There have been many published picture corpora. However, more than half of the world's population speaks more than one language and, as language and culture are intertwined, some of the items from a picture corpus designed for a given language in a particular culture may not fit another culture (with the same or different language). There is also an awareness that language research can gain from the study of bi-/multilingual individuals who are immersed in multilingual contexts that foster inter-language interactions. Consequently, we developed a relatively large corpus of pictures (663 nouns, 96 verbs) and collected normative data from multilingual speakers of Kannada (a southern Indian language) on two picture-related measures (name agreement, image agreement) and three word-related measures (familiarity, subjective frequency, age of acquisition), and report objective visual complexity and syllable count of the words. Naming labels were classified into words from the target language (i.e., Kannada), cognates (borrowed from/shared with another language), translation equivalents, and elaborations. The picture corpus had > 85% mean concept agreement with multiple acceptable names (1-7 naming labels) for each concept. The mean percentage name agreement for the modal name was > 70%, with H-statistics of 0.89 for nouns and 0.52 for verbs. We also analyse the variability of responses highlighting the influence of bi-/multilingualism on (picture) naming. The picture corpus is freely accessible to researchers and clinicians. 
It may be used for future standardization with other languages of similar cultural contexts, and relevant items can be used in languages from different cultures, following suitable standardization.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":null,"pages":null},"PeriodicalIF":4.6,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11362232/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141445375","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
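The H-statistics reported in this abstract are the standard name-agreement measure of Snodgrass and Vanderwart (1980): H = Σ pᵢ·log₂(1/pᵢ) over the naming labels given for a picture. A short sketch with hypothetical responses (not items from the corpus):

```python
# H statistic for picture-name agreement: 0 when every participant
# gives the same name; larger values mean more diverse naming labels.
# The response lists below are hypothetical.
import math
from collections import Counter

def name_agreement_h(names):
    counts = Counter(names)
    n = len(names)
    return sum((c / n) * math.log2(n / c) for c in counts.values())

print(name_agreement_h(["dog"] * 20))                   # 0.0, perfect agreement
print(name_agreement_h(["dog"] * 10 + ["puppy"] * 10))  # 1.0 bit, two equally common names
```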
{"title":"Assessing the distortions introduced when calculating d': A simulation approach.","authors":"Yiyang Chen, Heather R Daly, Mark A Pitt, Trisha Van Zandt","doi":"10.3758/s13428-024-02447-8","DOIUrl":"10.3758/s13428-024-02447-8","url":null,"abstract":"<p><p>The discriminability measure <math><msup><mi>d</mi> <mo>'</mo></msup> </math> is widely used in psychology to estimate sensitivity independently of response bias. The conventional approach to estimate <math><msup><mi>d</mi> <mo>'</mo></msup> </math> involves a transformation from the hit rate and the false-alarm rate. When performance is perfect, correction methods must be applied to calculate <math><msup><mi>d</mi> <mo>'</mo></msup> </math> , but these corrections distort the estimate. In three simulation studies, we show that distortion in <math><msup><mi>d</mi> <mo>'</mo></msup> </math> estimation can arise from other properties of the experimental design (number of trials, sample size, sample variance, task difficulty) that, when combined with application of the correction method, make <math><msup><mi>d</mi> <mo>'</mo></msup> </math> distortion in any specific experiment design complex and can mislead statistical inference in the worst cases (Type I and Type II errors). To address this problem, we propose that researchers simulate <math><msup><mi>d</mi> <mo>'</mo></msup> </math> estimation to explore the impact of design choices, given anticipated or observed data. 
An R Shiny application is introduced that estimates <math><msup><mi>d</mi> <mo>'</mo></msup> </math> distortion, providing researchers the means to identify distortion and take steps to minimize its impact.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":null,"pages":null},"PeriodicalIF":4.6,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141496962","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
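The correction problem this abstract describes can be made concrete with the widely used log-linear correction: when performance is perfect, the raw hit or false-alarm rate maps to an infinite z-score, so a correction caps the estimate, and the cap depends on the trial counts. A sketch of the conventional computation (this is the standard formula, not the authors' Shiny application):

```python
# Conventional d' with the log-linear correction (add 0.5 to each
# count, 1 to each trial total), which keeps perfect hit/false-alarm
# rates finite but biases the estimate -- the distortion that the
# simulations in the paper quantify.
from statistics import NormalDist

def d_prime_loglinear(hits, n_signal, false_alarms, n_noise):
    h = (hits + 0.5) / (n_signal + 1)
    f = (false_alarms + 0.5) / (n_noise + 1)
    z = NormalDist().inv_cdf
    return z(h) - z(f)

# Perfect performance: the true d' is unbounded, but the correction
# caps the estimate, and the cap grows with the number of trials, so
# the distortion depends on the experimental design.
print(d_prime_loglinear(20, 20, 0, 20))
print(d_prime_loglinear(100, 100, 0, 100))
```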
{"title":"Estimating statistical power for structural equation models in developmental cognitive science: A tutorial in R: Power simulation for SEMs.","authors":"Elisa S Buchberger, Chi T Ngo, Aaron Peikert, Andreas M Brandmaier, Markus Werkle-Bergner","doi":"10.3758/s13428-024-02396-2","DOIUrl":"10.3758/s13428-024-02396-2","url":null,"abstract":"<p><p>Determining the compositional structure and dimensionality of psychological constructs lies at the heart of many research questions in developmental science. Structural equation modeling (SEM) provides a versatile framework for formalizing and estimating the relationships among multiple latent constructs. While the flexibility of SEM can accommodate many complex assumptions on the underlying structure of psychological constructs, it makes a priori estimation of statistical power and required sample size challenging. This difficulty is magnified when comparing non-nested SEMs, which prevents the use of traditional likelihood-ratio tests. Sample size estimates for SEM model fit comparisons typically rely on generic rules of thumb. Such heuristics can be misleading because statistical power in SEM depends on a variety of model properties. Here, we demonstrate a Monte Carlo simulation approach for estimating a priori statistical power for model selection when comparing non-nested models in an SEM framework.
We provide a step-by-step guide to this approach based on an example from our memory development research in children.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":null,"pages":null},"PeriodicalIF":4.6,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11362481/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141161225","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
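The Monte Carlo logic of this tutorial generalizes beyond SEM: simulate data under a known "true" model many times, fit the competing non-nested models, and count how often an information criterion selects the true one. A deliberately simplified sketch (the tutorial itself uses R and SEMs; here two non-nested one-predictor regressions compared by AIC stand in for them):

```python
# Monte Carlo power for model selection, illustrated with two
# non-nested regressions instead of SEMs: power is the proportion of
# simulated datasets in which AIC picks the data-generating model.
import math
import random

def aic_ols(y, x):
    """AIC of a one-predictor least-squares regression (3 parameters:
    intercept, slope, residual variance)."""
    n = len(y)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
    a = my - b * mx
    rss = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    return n * math.log(rss / n) + 2 * 3

def power_to_select_true_model(n=100, n_sims=200, seed=1):
    rng = random.Random(seed)
    wins = 0
    for _ in range(n_sims):
        x1 = [rng.gauss(0, 1) for _ in range(n)]
        x2 = [rng.gauss(0, 1) for _ in range(n)]      # irrelevant predictor
        y = [0.5 * a + rng.gauss(0, 1) for a in x1]   # true model uses x1
        if aic_ols(y, x1) < aic_ols(y, x2):
            wins += 1
    return wins / n_sims

print(power_to_select_true_model())  # proportion of correct selections
```

Repeating this over candidate sample sizes gives the a priori power curve the tutorial builds for SEMs.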
{"title":"Estimating the proportions and latencies of reaction time outliers: A pooling method and case study of lexical decision tasks.","authors":"Jeff Miller","doi":"10.3758/s13428-024-02419-y","DOIUrl":"10.3758/s13428-024-02419-y","url":null,"abstract":"<p><p>A methodological problem in most reaction time (RT) studies is that some measured RTs may be outliers-that is, they may be very fast or very slow for reasons unconnected to the task-related processing of interest. Numerous ad hoc methods have been suggested to discriminate between such outliers and the valid RTs of interest, but it is extremely difficult to determine how well these methods work in practice because virtually nothing is known about the actual characteristics of outliers in real RT datasets. This article proposes a new method of pooling cumulative distribution function values for examining empirical RT distributions to assess both the proportions of outliers and their latencies relative to those of the valid RTs. As the method is developed, its strengths and weaknesses are examined using simulations based on previously suggested ad hoc models for RT outliers with particular assumed proportions and distributions of valid RTs and outliers. The method is then applied to several large RT datasets from lexical decision tasks, and the results provide the first empirically based description of outlier RTs. 
For these datasets, fewer than 1% of the RTs seem to be outliers, and the median outlier latency appears to be approximately 4-6 standard deviations of RT above the mean of the valid RT distribution.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":null,"pages":null},"PeriodicalIF":4.6,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11362516/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141173763","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
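The pooling idea in this abstract can be sketched as follows: map each RT to a CDF value under a per-subject model of the valid RTs, then pool across subjects; excess mass near 1 hints at slow outliers. This is a simplified illustration under an assumed normal valid-RT distribution with a robust fit, not Miller's estimator, and the data are hypothetical:

```python
# Simplified illustration of pooled CDF values for detecting slow RT
# outliers. Per subject, a normal model of the valid RTs is fitted
# robustly (median and IQR, so a few outliers barely move the fit);
# each RT is then mapped to its CDF value under that model and pooled.
import random
from statistics import NormalDist, median

def pooled_cdf_values(rts_by_subject):
    pooled = []
    for rts in rts_by_subject:
        srt = sorted(rts)
        n = len(srt)
        med = median(srt)
        iqr = srt[(3 * n) // 4] - srt[n // 4]
        sd = iqr / 1.349  # normal-theory IQR-to-SD conversion
        model = NormalDist(med, sd)
        pooled.extend(model.cdf(rt) for rt in rts)
    return pooled

rng = random.Random(0)
# Hypothetical data: 100 valid RTs per subject plus two very slow outliers.
subjects = [[rng.gauss(600, 80) for _ in range(100)]
            + [rng.gauss(2000, 200) for _ in range(2)]
            for _ in range(20)]
vals = pooled_cdf_values(subjects)
# Valid RTs yield roughly uniform CDF values; the outliers pile up
# near 1, so the mass above 0.99 exceeds the 1% expected under uniformity.
excess = sum(v > 0.99 for v in vals) / len(vals) - 0.01
print(round(excess, 3))
```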
{"title":"Validating the IDRIS and IDRIA: Two infrequency/frequency scales for detecting careless and insufficient effort survey responders.","authors":"Cameron S Kay","doi":"10.3758/s13428-024-02452-x","DOIUrl":"10.3758/s13428-024-02452-x","url":null,"abstract":"<p><p>To detect careless and insufficient effort (C/IE) survey responders, researchers can use infrequency items - items that almost no one agrees with (e.g., \"When a friend greets me, I generally try to say nothing back\") - and frequency items - items that almost everyone agrees with (e.g., \"I try to listen when someone I care about is telling me something\"). Here, we provide initial validation for two sets of these items: the 14-item Invalid Responding Inventory for Statements (IDRIS) and the 6-item Invalid Responding Inventory for Adjectives (IDRIA). Across six studies (N<sub>1</sub> = 536; N<sub>2</sub> = 701; N<sub>3</sub> = 500; N<sub>4</sub> = 499; N<sub>5</sub> = 629, N<sub>6</sub> = 562), we found consistent evidence that the IDRIS is capable of detecting C/IE responding among statement-based scales (e.g., the HEXACO-PI-R) and the IDRIA is capable of detecting C/IE responding among both adjective-based scales (e.g., the Lex-20) and adjective-derived scales (e.g., the BFI-2). These findings were robust across different analytic approaches (e.g., Pearson correlations; Spearman rank-order correlations), different indices of C/IE responding (e.g., person-total correlations; semantic synonyms; horizontal cursor variability), and different sample types (e.g., US undergraduate students; Nigerian survey panel participants). 
Taken together, these results provide promising evidence for the utility of the IDRIS and IDRIA in detecting C/IE responding.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":null,"pages":null},"PeriodicalIF":4.6,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141557942","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
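The screening logic behind infrequency/frequency scales is straightforward to sketch: agreement with items almost no one agrees with, or disagreement with items almost everyone agrees with, accumulates into a validity score. The items, cutoffs, and threshold below are hypothetical, not the published IDRIS/IDRIA scoring rules:

```python
# Simple C/IE screening sketch: count endorsements of infrequency
# items and non-endorsements of frequency items on a 1-5 Likert scale,
# and flag respondents whose count reaches a threshold.
# Hypothetical scoring, not the IDRIS/IDRIA rules.

def cie_flag(responses, infrequency_idx, frequency_idx,
             agree_cutoff=4, threshold=2):
    score = sum(responses[i] >= agree_cutoff for i in infrequency_idx)
    score += sum(responses[i] < agree_cutoff for i in frequency_idx)
    return score >= threshold

# Items 0-1 are infrequency items; item 2 is a frequency item.
careful = [1, 2, 5]   # disagrees with infrequency, agrees with frequency
careless = [4, 4, 4]  # "agrees" with everything, including infrequency items
print(cie_flag(careful, [0, 1], [2]))   # False
print(cie_flag(careless, [0, 1], [2]))  # True
```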
{"title":"Effective and adaptable: Four studies on the shortened attitude toward the color blue marker variable scale.","authors":"Brian Miller, Marcia Simmering, Elizabeth Ragland","doi":"10.3758/s13428-024-02465-6","DOIUrl":"10.3758/s13428-024-02465-6","url":null,"abstract":"<p><p>This research extends recent scale development efforts for the marker variable Attitude Toward the Color Blue (ATCB) by addressing the efficacy of multiple shorter permutations of the scale. The purpose of this study is to develop a shorter version of an ideal marker variable scale used to detect common method variance (CMV). Potential uses of the shorter version of ATCB include intensive longitudinal studies, implementation of experience sampling methodology, or any brief survey for which the original version might be cumbersome to implement repeatedly or appear very odd to the respondent when paired with only a few other substantive items. Study 1 uses all six-, five-, and four-item versions of ATCB in confirmatory factor analysis (CFA) marker technique tests on a bivariate relationship. Study 2 analyzes the best- and worst-performing versions of reduced lengths of the ATCB scale found in the first study on another bivariate relationship. Study 3 compares the original seven-item version, as well as randomly selected reduced-length versions, in a data set with 15 model relationships. Study 4 uses an experiment to determine the efficacy of providing respondents with one of three shorter ATCB scales in a model of three substantive variables. Our findings indicate that ATCB permutations of different lengths can detect CMV successfully, and that researchers should choose the length of scale based on their survey length.
We conclude that ATCB is adaptable for a variety of research situations, presenting it as a valuable tool for high-quality research.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":null,"pages":null},"PeriodicalIF":4.6,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141619151","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
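The marker-variable logic these studies test with the CFA marker technique can be illustrated in its simplest form: if a theoretically unrelated marker correlates with two substantive variables, the variance it shares with them is a candidate for common method variance, and partialling it out shrinks the inflated correlation. A heavily simplified stand-in for the CFA approach, on simulated data:

```python
# Partial correlation as a minimal illustration of marker-variable
# logic: shrinkage of r(x, y) after controlling for a marker suggests
# common method variance. Simulated data; a stand-in for the CFA
# marker technique, not the studies' analysis.
import math
import random

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

def partial_corr(x, y, marker):
    rxy, rxm, rym = pearson(x, y), pearson(x, marker), pearson(y, marker)
    return (rxy - rxm * rym) / math.sqrt((1 - rxm**2) * (1 - rym**2))

rng = random.Random(42)
n = 500
method = [rng.gauss(0, 1) for _ in range(n)]            # shared method bias
x = [0.6 * m + rng.gauss(0, 1) for m in method]         # substantive variable
y = [0.6 * m + rng.gauss(0, 1) for m in method]         # truly unrelated to x
marker = [0.6 * m + rng.gauss(0, 1) for m in method]    # ATCB-like marker

print(pearson(x, y))            # inflated by the shared method factor
print(partial_corr(x, y, marker))  # closer to the true (zero) relation
```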