{"title":"Individual differences in online research: Comparing lab-based and online administration of a psycholinguistic battery of linguistic and domain-general skills.","authors":"Kyla McConnell, Florian Hintz, Antje S Meyer","doi":"10.3758/s13428-024-02533-x","DOIUrl":"https://doi.org/10.3758/s13428-024-02533-x","url":null,"abstract":"<p><p>Experimental psychologists and psycholinguists increasingly turn to online research for data collection due to the ease of sampling many diverse participants in parallel. Online research has shown promising validity and consistency, but is it suitable for all paradigms? Specifically, is it reliable enough for individual differences research? The current paper reports performance on 15 tasks from a psycholinguistic individual differences battery, including timed and untimed assessments of linguistic abilities, as well as domain-general skills. From a demographically homogeneous sample of young Dutch people, 149 participants completed the study in the lab and 515 completed it online. Our results indicate that there is no reason to assume that participants tested online will underperform compared to lab-based testing, though they highlight the importance of motivation and the potential for external help (e.g., through looking up answers) online. Overall, we conclude that there is reason for optimism about the future of online research into individual differences.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":"57 1","pages":"22"},"PeriodicalIF":4.6,"publicationDate":"2024-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142863091","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Salford Nature Environments Database (SNED): an open-access database of standardized high-quality pictures from natural environments.","authors":"Robert C A Bendall, Sam Royle, James Dodds, Hugh Watmough, Jamie C Gillman, David Beevers, Simon Cassidy, Ben Short, Paige Metcalfe, Michael J Lomas, Draco Graham-Kevan, Samantha E A Gregory","doi":"10.3758/s13428-024-02556-4","DOIUrl":"https://doi.org/10.3758/s13428-024-02556-4","url":null,"abstract":"<p><p>The growing interest in harnessing natural environments to enhance mental health, including cognitive functioning and mood, has yielded encouraging results in initial studies. Given that images of nature have demonstrated similar benefits, they are frequently employed as proxies for real-world environments. To ensure precision and control, researchers often manipulate images of natural environments. The effectiveness of this approach relies on standardization of imagery, and therefore inconsistency in methods and stimuli has limited the synthesis of research findings in the area. Responding to these limitations, the current paper introduces the Salford Nature Environments Database (SNED), a standardized database of natural images created to support ongoing research into the benefits of nature exposure. The SNED is currently the most comprehensive nature image database available, comprising 500 high-quality, standardized photographs capturing a variety of possible natural environments across the seasons. It also includes normative scores for user-rated (801 participants) characteristics of fascination, refuge and prospect, compatibility, preference, valence, arousal, and approach-avoidance, as well as data on physical properties of the images, specifically luminance, contrast, entropy, CIELAB colour space parameter values, and fractal dimensions. All image ratings and content detail, along with participant details, are freely available online. Researchers are encouraged to use this open-access database in accordance with the specific aims and design of their study. The SNED represents a valuable resource for continued research in areas such as nature-based therapy, social prescribing, and experimental approaches investigating underlying mechanisms that help explain how natural environments improve mental health and wellbeing.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":"57 1","pages":"21"},"PeriodicalIF":4.6,"publicationDate":"2024-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142863092","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Eye tracker calibration: How well can humans refixate a target?","authors":"Ignace T C Hooge, Roy S Hessels, Diederick C Niehorster, Richard Andersson, Marta K Skrok, Robert Konklewski, Patrycjusz Stremplewski, Maciej Nowakowski, Szymon Tamborski, Anna Szkulmowska, Maciej Szkulmowski, Marcus Nyström","doi":"10.3758/s13428-024-02564-4","DOIUrl":"https://doi.org/10.3758/s13428-024-02564-4","url":null,"abstract":"<p><p>Irrespective of the precision, the inaccuracy of a pupil-based eye tracker is about 0.5°. This paper delves into two factors that potentially increase the inaccuracy of the gaze signal, namely, (1) pupil-size changes and the pupil-size artefact (PSA) and (2) the putative inability of experienced individuals to precisely refixate a visual target. Experiment 1 utilizes a traditional pupil-CR eye tracker, while Experiment 2 employs a retinal eye tracker, the FreezeEye tracker, eliminating the pupil-based estimation. Results reveal that the PSA significantly affects gaze accuracy, introducing up to 0.5° inaccuracies during calibration and validation. Corrections based on the relation between pupil size and apparent gaze shift substantially reduce inaccuracies, underscoring the PSA's influence on eye-tracking quality. Conversely, Experiment 2 demonstrates humans' precise refixation abilities, suggesting that the accuracy of the gaze signal is not limited by human refixation inconsistencies.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":"57 1","pages":"23"},"PeriodicalIF":4.6,"publicationDate":"2024-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142863088","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A modular machine learning tool for holistic and fine-grained behavioral analysis.","authors":"Bruno Michelot, Alexandra Corneyllie, Marc Thevenet, Stefan Duffner, Fabien Perrin","doi":"10.3758/s13428-024-02511-3","DOIUrl":"https://doi.org/10.3758/s13428-024-02511-3","url":null,"abstract":"<p><p>Artificial intelligence techniques offer promising avenues for exploring human body features from videos, yet no freely accessible tool has reliably provided holistic and fine-grained behavioral analyses to date. To address this, we developed a machine learning tool based on a two-level approach: a first, lower level of processing using computer vision to extract fine-grained and comprehensive behavioral features such as skeleton or facial points, gaze, and action units; and a second level of machine learning classification coupled with explainability, providing modularity, to determine which behavioral features are triggered by specific environments. To validate our tool, we filmed 16 participants across six conditions, varying according to the presence of a person (\"Pers\"), a sound (\"Snd\"), or silence (\"Rest\"), and according to emotional levels using self-referential (\"Self\") and control (\"Ctrl\") stimuli. We demonstrated the effectiveness of our approach by extracting and correcting behavior from videos using two computer vision software packages (OpenPose and OpenFace) and by training two algorithms (XGBoost and long short-term memory [LSTM]) to differentiate between experimental conditions. High classification rates were achieved for \"Pers\" conditions versus \"Snd\" or \"Rest\" (AUC = 0.8-0.9), with explainability revealing action units and gaze as key features. Additionally, moderate classification rates were attained for \"Snd\" versus \"Rest\" (AUC = 0.7), attributed to action units, limbs, and head points, as well as for \"Self\" versus \"Ctrl\" (AUC = 0.7-0.8), due to facial points. These findings were consistent with a more conventional hypothesis-driven approach. Overall, our study suggests that our tool is well suited for holistic and fine-grained behavioral analysis and offers modularity for extension into more complex naturalistic environments.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":"57 1","pages":"24"},"PeriodicalIF":4.6,"publicationDate":"2024-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142863176","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"5956 German affective norms for atmospheres in organizations (GANAiO).","authors":"Anna Eifert, Christian Julmi","doi":"10.3758/s13428-024-02566-2","DOIUrl":"https://doi.org/10.3758/s13428-024-02566-2","url":null,"abstract":"<p><p>This article develops a comprehensive database comprising 5956 German affective norms specifically tailored for the study of organizational atmospheres through computational verbal language analysis. This dictionary adopts both dimensional and categorical approaches. The theoretical foundation of this study is the circumplex model of affective atmospheres. Similar to established methodologies, each word is rated based on the dimensions of valence and arousal. Going beyond the dimensional approach, this article introduces a classification system with 11 distinct atmospheric categories, assigning the words to their corresponding categories. This dictionary represents the first attempt to apply computer-aided text analysis (CATA) to the study of organizational atmospheres, providing a practical tool to support research in this developing area.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":"57 1","pages":"20"},"PeriodicalIF":4.6,"publicationDate":"2024-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142851761","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reliably measuring learning-dependent distractor suppression with eye tracking.","authors":"Andy J Kim, Laurent Grégoire, Brian A Anderson","doi":"10.3758/s13428-024-02552-8","DOIUrl":"https://doi.org/10.3758/s13428-024-02552-8","url":null,"abstract":"<p><p>In the field of psychological science, behavioral performance in computer-based cognitive tasks often exhibits poor reliability. The absence of reliable measures of cognitive processes contributes to non-reproducibility in the field and impedes the investigation of individual differences. Specifically in visual search paradigms, response time-based measures have shown poor test-retest reliability and internal consistency across attention capture and distractor suppression, but one study has demonstrated the potential for oculomotor measures to exhibit superior reliability. Therefore, in this study, we investigated three datasets to compare the reliability of learning-dependent distractor suppression measured via distractor fixations (oculomotor capture) and latency to fixate the target (fixation times). Our findings reveal superior split-half reliability of oculomotor capture compared to that of fixation times regardless of the critical distractor comparison, with the reliability of oculomotor capture in most cases falling within the range that is acceptable for the investigation of individual differences. We additionally find that older adults have superior oculomotor reliability compared with young adults, potentially addressing a significant limitation in the aging literature of high variability in response time measures due to slower responses. Our findings highlight the utility of measuring eye movements in the pursuit of reliable indicators of distractor processing and the need to further test and develop additional measures in other sensory domains to maximize statistical power, reliability, and reproducibility.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":"57 1","pages":"18"},"PeriodicalIF":4.6,"publicationDate":"2024-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142851928","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"MacBehaviour: An R package for behavioural experimentation on large language models.","authors":"Xufeng Duan, Shixuan Li, Zhenguang G Cai","doi":"10.3758/s13428-024-02524-y","DOIUrl":"https://doi.org/10.3758/s13428-024-02524-y","url":null,"abstract":"<p><p>The study of large language models (LLMs) and LLM-powered chatbots has gained significant attention in recent years, with researchers treating LLMs as participants in psychological experiments. To facilitate this research, we developed an R package called \"MacBehaviour\" (https://github.com/xufengduan/MacBehaviour), which interacts with over 100 LLMs, including OpenAI's GPT family, the Claude family, Gemini, the Llama family, and other open-weight models. The package streamlines the processes of LLM behavioural experimentation by providing a comprehensive set of functions for experiment design, stimuli presentation, model behaviour manipulation, and logging responses and token probabilities. With a few lines of code, researchers can seamlessly set up and conduct psychological experiments, making LLM behaviour studies highly accessible. To validate the utility and effectiveness of \"MacBehaviour,\" we conducted three experiments on GPT-3.5 Turbo, Llama-2-7b-chat-hf, and Vicuna-1.5-13b, replicating the sound-gender association in LLMs. The results consistently demonstrated that these LLMs exhibit human-like tendencies to infer gender from novel personal names based on their phonology, as previously shown by Cai et al. (2024). In conclusion, \"MacBehaviour\" is a user-friendly R package that simplifies and standardises the experimental process for machine behaviour studies, offering a valuable tool for researchers in this field.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":"57 1","pages":"19"},"PeriodicalIF":4.6,"publicationDate":"2024-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142851963","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"BRAND: Brand recognition and attitude norms database.","authors":"Carolina Raffaelli, Elena Bocchi, Zachary Estes, James S Adelman","doi":"10.3758/s13428-024-02525-x","DOIUrl":"https://doi.org/10.3758/s13428-024-02525-x","url":null,"abstract":"<p><p>Research involving brands has increased substantially in recent decades. However, no extensive and free dataset of consumer responses to branding stimuli exists. The present research develops and validates such a dataset, which we call the Brand Recognition and Attitude Norms Database (BRAND). BRAND is the most comprehensive set of methodologically transparent, freely available, research-relevant consumer responses to branding stimuli, with measures of familiarity (awareness), liking (attitudes), and memory (recognition) of more than 500 top brands and their logos, spanning 32 industries. BRAND includes 5,356 primary datapoints aggregated from 244,400 raw datapoints (i.e., individual familiarity, liking, and memory responses) collected from 2,000 US-resident consumers in two years (i.e., 2020 and 2024). The data exhibit good reliability, face validity, external validity, robustness across samples and time, cross-validity, and discriminant validity. BRAND can be broadly useful for testing hypotheses involving responses to brands, and for selecting stimuli in any study involving brands or logos. Thus, BRAND can facilitate research not only in consumer behavior and psychology but also in several related academic disciplines (e.g., economics, management, marketing).</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":"57 1","pages":"17"},"PeriodicalIF":4.6,"publicationDate":"2024-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142833525","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The forced-response method: A new chronometric approach to measure conflict processing.","authors":"Taraz G Lee, Jacob Sellers, John Jonides, Han Zhang","doi":"10.3758/s13428-024-02516-y","DOIUrl":"https://doi.org/10.3758/s13428-024-02516-y","url":null,"abstract":"<p><p>Despite long-standing concerns about the use of free reaction times (RTs) in cognitive psychology, they remain a prevalent measure of conflict resolution. This report presents the forced-response method as a fresh approach to examine speed-accuracy trade-off functions (SATs) in conflict tasks. The method involves fixing the overall response time, varying the onset of stimuli, and observing response expression. We applied this method to an arrow flanker task. By systematically varying the time between stimulus onset and response, we reveal a comprehensive time course of the flanker interference effect that is rarely observed in previous literature. We further show that influential manipulations observed in free-RT paradigms similarly affect accuracy within the forced-response technique, suggesting that the forced-response method retains the core cognitive processing characteristics of traditional free-RT conflict tasks. As a behavioral method that examines the time course of cognitive processing, the forced-response method provides a novel and more nuanced look into the dynamics of conflict resolution.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":"57 1","pages":"15"},"PeriodicalIF":4.6,"publicationDate":"2024-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142816891","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The fundamentals of eye tracking part 1: The link between theory and research question.","authors":"Roy S Hessels, Antje Nuthmann, Marcus Nyström, Richard Andersson, Diederick C Niehorster, Ignace T C Hooge","doi":"10.3758/s13428-024-02544-8","DOIUrl":"https://doi.org/10.3758/s13428-024-02544-8","url":null,"abstract":"<p><p>Eye tracking technology has become increasingly prevalent in scientific research, offering unique insights into oculomotor and cognitive processes. The present article explores the relationship between scientific theory, the research question, and the use of eye-tracking technology. It aims to guide readers in determining if eye tracking is suitable for their studies and how to formulate relevant research questions. Examples from research on oculomotor control, reading, scene perception, task execution, visual expertise, and instructional design are used to illustrate the connection between theory and eye-tracking data. These examples may serve as inspiration to researchers new to eye tracking. In summarizing the examples, three important considerations emerge: (1) whether the study focuses on describing eye movements or uses them as a proxy for, e.g., perceptual or cognitive processes, (2) the logical chain from theory to predictions, and (3) whether the study is of an observational or idea-testing nature. We provide a generic scheme and a set of specific questions that may help researchers formulate and explicate their research question using eye tracking.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":"57 1","pages":"16"},"PeriodicalIF":4.6,"publicationDate":"2024-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11638287/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142816893","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}