Walter F Bischof, Nicola C Anderson, Alan Kingstone
A tutorial: Analyzing eye and head movements in virtual reality.
Behavior Research Methods, December 2024. DOI: 10.3758/s13428-024-02482-5

This tutorial provides instruction on how to use the eye-tracking technology built into virtual reality (VR) headsets, emphasizing the analysis of head and eye movement data when an observer is situated at the center of an omnidirectional environment. We begin with a brief description of how VR eye movement research differs from previous forms of eye movement research, and we identify some outstanding gaps in the current literature. We then introduce the basic methodology used to collect VR eye movement data, both in general and with regard to the specific data we collected to illustrate different analytical approaches. We continue with the foundational ideas of data analysis in VR, including frames of reference, how to map eye and head position, and event detection. Next, we introduce core head and eye analyses focused on determining where the head and eyes are directed. We then expand on what has been presented, introducing several novel spatial, spatio-temporal, and temporal head-eye analysis techniques. We conclude with a reflection on what has been presented and on how the techniques introduced in this tutorial provide the scaffolding for extensions to more complex and dynamic VR environments.
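A recurring step in the analyses the tutorial describes is mapping a head or gaze direction vector into angular coordinates within a chosen frame of reference. A minimal sketch of that conversion is below; the function name and the axis convention (+z ahead, +x rightward, +y upward) are our illustrative assumptions, not the tutorial's code.

```python
import math

def gaze_to_angles(x, y, z):
    """Convert a 3D gaze direction vector to (azimuth, elevation) in degrees.

    Assumed convention: +z points straight ahead, +x rightward, +y upward;
    azimuth is positive to the right, elevation positive upward.
    """
    norm = math.sqrt(x * x + y * y + z * z)
    x, y, z = x / norm, y / norm, z / norm
    azimuth = math.degrees(math.atan2(x, z))   # horizontal angle from straight ahead
    elevation = math.degrees(math.asin(y))     # vertical angle above the horizon
    return azimuth, elevation
```

The same mapping applies to head direction vectors; eye-in-head angles can then be obtained by expressing gaze in the head's frame of reference before converting.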
Alex Koch, Austin Smith, Susan T Fiske, Andrea E Abele, Naomi Ellemers, Vincent Yzerbyt
Validating a brief measure of four facets of social evaluation.
Behavior Research Methods, December 2024. DOI: 10.3758/s13428-024-02489-y

Five studies (N = 7972) validated a brief measure and model of four facets of social evaluation (friendliness and morality as horizontal facets; ability and assertiveness as vertical facets). Perceivers expressed their personal impressions, or estimated society's impression, of different types of targets (i.e., envisioned or encountered groups or individuals) and numbers of targets (i.e., between six and 100), in either the separate, items-within-target mode or the joint, targets-within-item mode. Factor analyses confirmed that a two-items-per-facet measure fit the data well, and better than a four-items-per-dimension measure capturing the Big Two model (i.e., no facets, just the horizontal and vertical dimensions). As predicted, the correlations between the two horizontal facets and between the two vertical facets were higher than the correlations between any horizontal facet and any vertical facet. Perceivers' evaluations of targets on each facet predicted unique and relevant behavior intentions: perceiving a target as more friendly, moral, able, and assertive increased the likelihood of relying on the target's loyalty, fairness, intellect, and hubris in an economic game, respectively. These results establish the external, internal, convergent, discriminant, and predictive validity of the brief measure and model of four facets of social evaluation.
Ivan Tomić, Dagmar Adamcová, Máté Fehér, Paul M Bays
Dissecting the components of error in analogue report tasks.
Behavior Research Methods, December 2024. DOI: 10.3758/s13428-024-02453-w
Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11525414/pdf/

Over the last two decades, the analogue report task has become a standard method for measuring the fidelity of visual representations across research domains including perception, attention, and memory. Despite its widespread use, there has been no methodical investigation of the different task parameters that might contribute to response variability. To address this gap, we conducted two experiments manipulating components of a typical analogue report test of memory for colour hue. Human response errors were independently affected by changes in the storage and maintenance requirements of the task, demonstrated by a strong effect of set size even in the absence of a memory delay. In contrast, response variability was unaffected by the physical size of the colour wheel, implying a negligible contribution of motor noise to task performance, and unaffected by its chroma radius, highlighting the non-uniformity of the standard colour space. Comparing analogue report to a matched forced-choice task, we found that variation in the adjustment criterion made a limited contribution to analogue report variability, becoming meaningful only when representational noise was low. Our findings validate the analogue report task as a robust measure of representational fidelity for most purposes, while also quantifying non-representational sources of noise that would limit its reliability in specialized settings.
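Because responses in a colour-wheel analogue report task lie on a circle, response error must be computed as a wrapped angular difference rather than a plain subtraction. A minimal sketch of that computation (our own illustration, not the authors' analysis code):

```python
def circular_error(response_deg, target_deg):
    """Signed angular error between a response and a target on a 360-degree
    colour wheel, wrapped to the range (-180, 180]."""
    err = (response_deg - target_deg) % 360.0
    if err > 180.0:
        err -= 360.0
    return err
```

Distributions of such wrapped errors across trials are what analogue report studies typically summarize (e.g., by circular standard deviation) to estimate representational fidelity.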
{"title":"The role of individual differences in emotional word recognition: Insights from a large-scale lexical decision study.","authors":"Juan Haro, José Antonio Hinojosa, Pilar Ferré","doi":"10.3758/s13428-024-02488-z","DOIUrl":"10.3758/s13428-024-02488-z","url":null,"abstract":"<p><p>This work presents a large lexical decision mega-study in Spanish, with 918 participants and 7500 words, focusing on emotional content and individual differences. The main objective was to investigate how emotional valence and arousal influence word recognition, controlling for a large number of confounding variables. In addition, as a unique contribution, the study examined the modulation of these effects by individual differences. Results indicated a significant effect of valence and arousal on lexical decision times, with an interaction between these variables. A linear effect of valence was observed, with slower recognition times for negative words and faster recognition times for positive words. In addition, arousal showed opposite effects in positive and negative words. Importantly, the effect of emotional variables was affected by personality traits (extroversion, conscientiousness and openness to experience), age and gender, challenging the 'one-size-fits-all' interpretation of emotional word processing. All data collected in the study is available to the research community: https://osf.io/cbtqy . This includes data from each participant (RTs, errors and individual differences scores), as well as values of concreteness (n = 1690), familiarity (n = 1693) and age of acquisition (n = 2171) of the words collected exclusively for this study. 
This is a useful resource for researchers interested not only in emotional word processing, but also in lexical processing in general and the influence of individual differences.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":null,"pages":null},"PeriodicalIF":4.6,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11525433/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142131707","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Alberto Inuggi, Nicola Domenici, Alessia Tonelli, Monica Gori
PsySuite: An Android application designed to perform multimodal psychophysical testing.
Behavior Research Methods, December 2024. DOI: 10.3758/s13428-024-02475-4
Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11525261/pdf/

In the behavioral sciences, there is growing concern about the inflation of false-positive rates due to the number of under-powered studies shared in recent years. While problematic, recruiting many participants for long periods is realistically not achievable for many research facilities. Factors that hinder reaching optimal sample sizes include, to name but a few, research costs, participants' availability and commitment, and logistics. We address these issues by introducing PsySuite, an Android app designed to foster a remote approach to multimodal behavioral testing. To validate PsySuite, we first evaluated its ability to generate stimuli appropriate for rigorous psychophysical testing, measuring both the app's accuracy (i.e., stimulus onset, offset, and multimodal simultaneity) and precision (i.e., the stability of a given pattern across trials), using two different smartphone models. We then evaluated PsySuite's ability to replicate perceptual performance obtained with a classic psychophysical paradigm, comparing sample data collected with the app against data measured via a PC-based setup. Our results showed that PsySuite can accurately reproduce stimuli with minimum durations of 7 ms, 17 ms, and 30 ms for the auditory, visual, and tactile modalities, respectively, and that perceptual performance obtained with PsySuite was consistent with the perceptual behavior observed using the classical setup. Given the high accessibility PsySuite inherently supports, we share the app to further boost psychophysical research, aiming to make it cheap, user-friendly, and portable.
Philippe Goldammer, Peter Lucas Stöckli, Yannik Andrea Escher, Hubert Annen, Klaus Jonas, John Antonakis
Careless responding detection revisited: Accuracy of direct and indirect measures.
Behavior Research Methods, December 2024. DOI: 10.3758/s13428-024-02484-3
Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11525390/pdf/

To screen for careless responding, researchers have a choice between several direct measures (i.e., bogus items requiring the respondent to choose a specific answer) and indirect measures (i.e., unobtrusive post hoc indices). Given the dearth of research in the area, we examined how well direct and indirect indices perform relative to each other. In five experimental studies, we investigated whether the detection rates of the measures are affected by contextual factors: the severity of the careless response pattern, the type of item keying, and the type of item presentation. We fully controlled the information environment by experimentally inducing careless response sets under a variety of contextual conditions. In Studies 1 and 2, participants rated the personality of an actor who presented himself in a 5-min-long videotaped speech. In Studies 3, 4, and 5, participants rated their own personality across two measurements. With the exception of maximum longstring, intra-individual response variability, and individual contribution to model misfit, all examined indirect indices performed better than chance in most of the examined conditions. Moreover, indirect indices had detection rates as good as, and in many cases better than, the detection rates of direct measures. We therefore encourage researchers to use indirect indices, especially within-person consistency indices, instead of direct measures.
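Two of the indirect indices the study examines, maximum longstring and intra-individual response variability (IRV), have simple standard definitions: the longest run of identical consecutive responses, and the within-person standard deviation of responses. A minimal sketch (function names are ours, not the paper's):

```python
import statistics

def max_longstring(responses):
    """Maximum longstring: length of the longest run of identical
    consecutive responses in a participant's answer vector."""
    if not responses:
        return 0
    longest = current = 1
    for prev, cur in zip(responses, responses[1:]):
        current = current + 1 if cur == prev else 1
        longest = max(longest, current)
    return longest

def irv(responses):
    """Intra-individual response variability: the population standard
    deviation of a participant's responses. Low values can flag
    straight-lining; the paper found this index performed poorly."""
    return statistics.pstdev(responses)
```

Both indices are computed per participant over the raw item responses and then compared against a cutoff to flag suspected careless responders.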
{"title":"People make mistakes: Obtaining accurate ground truth from continuous annotations of subjective constructs.","authors":"Brandon M Booth, Shrikanth S Narayanan","doi":"10.3758/s13428-024-02503-3","DOIUrl":"10.3758/s13428-024-02503-3","url":null,"abstract":"<p><p>Accurately representing changes in mental states over time is crucial for understanding their complex dynamics. However, there is little methodological research on the validity and reliability of human-produced continuous-time annotation of these states. We present a psychometric perspective on valid and reliable construct assessment, examine the robustness of interval-scale (e.g., values between zero and one) continuous-time annotation, and identify three major threats to validity and reliability in current approaches. We then propose a novel ground truth generation pipeline that combines emerging techniques for improving validity and robustness. We demonstrate its effectiveness in a case study involving crowd-sourced annotation of perceived violence in movies, where our pipeline achieves a .95 Spearman correlation in summarized ratings compared to a .15 baseline. 
These results suggest that highly accurate ground truth signals can be produced from continuous annotations using additional comparative annotation (e.g., a versus b) to correct structured errors, highlighting the need for a paradigm shift in robust construct measurement over time.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":null,"pages":null},"PeriodicalIF":4.6,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11525321/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142340234","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
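The headline result above is a Spearman rank correlation between summarized ratings and a reference signal. As a point of reference for that metric (not the paper's pipeline), a dependency-free, tie-aware Spearman implementation:

```python
def average_ranks(values):
    """Rank values from 1..n, assigning tied values their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of positions i..j, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman correlation: Pearson correlation of the rank vectors."""
    rx, ry = average_ranks(x), average_ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5
```

Because it operates on ranks, the metric rewards recovering the ordering of moments by perceived intensity rather than their exact interval-scale values, which is why it suits the comparative-annotation correction described above.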
Aniek Sies, Lisa Doove, Kristof Meers, Elise Dusseldorp, Iven Van Mechelen
Estimating optimal decision trees for treatment assignment: The case of K > 2 treatment alternatives.
Behavior Research Methods, December 2024. DOI: 10.3758/s13428-024-02470-9

For many problems in clinical practice, multiple treatment alternatives are available. Given data from a randomized controlled trial or an observational study, an important challenge is to estimate an optimal decision rule that specifies for each client the most effective treatment alternative, given his or her pattern of pretreatment characteristics. In the present paper we look for such a rule within the insightful family of classification trees. Unfortunately, however, there is a dearth of readily accessible software tools for optimal decision tree estimation in the case of more than two treatment alternatives. Moreover, this primary tree estimation problem is cursed with two secondary problems: a structural missingness in typical studies on treatment evaluation (because every individual is assigned to a single treatment alternative only), and a major issue of replicability. In this paper we propose solutions for both the primary and the secondary problems at stake. We evaluate the proposed solution in a simulation study, and we illustrate it with an application on the search for an optimal tree-based treatment regime in a randomized controlled trial on K = 3 different types of aftercare for younger women with early-stage breast cancer. We conclude by arguing that the proposed solutions may have relevance for several other classification problems inside and outside the domain of optimal treatment assignment.
{"title":"Metrics for quantifying co-development at the individual level.","authors":"Ashley A Edwards, Yaacov Petscher","doi":"10.3758/s13428-024-02487-0","DOIUrl":"10.3758/s13428-024-02487-0","url":null,"abstract":"<p><p>Previous research on co-development has focused on modeling the relations at the group level; however, how individuals differ in co-development may provide important information as well. Recent work has used vector plots to visually explore individual differences in co-development; however, these judgements were made based on visual inspection of a vector plot rather than the calculation of metrics. Here we propose two metrics that can be used to quantify co-development at the individual level: the co-development change ratio (CCR) and the angle of co-development metric (ACM). CCR provides information about the symmetry of development, examining whether an individual grew at the same pace in one skill relative to peers as compared to growth in the other skill relative to peers. ACM represents the relative amount and direction of change on each skill. This paper provides a tutorial on how to calculate and interpret these two metrics for quantifying co-development at the individual level.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":null,"pages":null},"PeriodicalIF":4.6,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142131706","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Michael J Kleinsasser, Ritesh Mistry, Hsing-Fang Hsieh, William J McCarthy, Trivellore Raghunathan
Person explanatory multidimensional item response theory with the instrument package in R.
Behavior Research Methods, December 2024. DOI: 10.3758/s13428-024-02490-5

We present the new R package instrument for Bayesian estimation of person explanatory multidimensional item response theory. The package implements an exploratory multidimensional item response theory model and a higher-order multidimensional item response theory model, a type of confirmatory multidimensional item response theory. Explanation of person parameters is accomplished by fixed and random effect linear regression models. Estimation is carried out using Hamiltonian Monte Carlo in Stan. In this article, we provide a detailed description of the models; we use the instrument package to demonstrate fitting explanatory item response models with fixed and random effects (i.e., mixed modeling) of person parameters in R; and we perform a simulation study to evaluate the performance of our implementation of the models.