Behavior Research Methods: Latest Articles

Quantitative comparison of a mobile, tablet-based eye-tracker and two stationary, video-based eye-trackers.
IF 4.6, Q2, Psychology
Behavior Research Methods Pub Date: 2025-01-06 DOI: 10.3758/s13428-024-02542-w
Aylin König, Uwe Thomas, Frank Bremmer, Stefan Dowiasch
The analysis of eye movements is a noninvasive, reliable, and fast method to detect and quantify brain (dys)function. Here, we investigated the performance of two novel eye-trackers, the Thomas Oculus Motus-research mobile (TOM-rm) and the TOM-research stationary (TOM-rs), and compared them with a well-established video-based eye-tracker, the EyeLink 1000 Plus (EL). The TOM-rm is a fully integrated, tablet-based mobile device that presents visual stimuli and records head-unrestrained eye movements at 30 Hz without additional infrared illumination. The TOM-rs is a stationary, video-based eye-tracker that records eye movements at either high spatial or high temporal resolution. We compared the performance of all three eye-trackers in two behavioral tasks: pro-/anti-saccade and free viewing. We collected data from 30 human subjects while running all three eye-tracking devices in parallel. Parameters requiring high spatial or temporal resolution (e.g., saccade latency or gain) differed significantly between the EL and the TOM-rm in both tasks. Differences between results from the TOM-rs and the EL were most likely due to experimental conditions, which could not be optimized for both systems simultaneously. We conclude that the TOM-rm can reliably measure basic eye-movement parameters, such as the error rate in a typical pro-/anti-saccade task or the number and position of fixations in a visual foraging task, at comparably low spatial and temporal resolution. The TOM-rs, on the other hand, can provide high-resolution oculomotor data at least on a par with an established reference system.
Behavior Research Methods, 57(1), 45. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11703885/pdf/
Citations: 0
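The saccade-based parameters compared across these systems (latency, gain, error rate) are typically derived from raw gaze samples by a velocity-threshold event detector. The sketch below illustrates that general technique on synthetic data; it is not the authors' pipeline, and the 30 deg/s threshold and sampling rate are illustrative assumptions.

```python
import numpy as np

def detect_saccades(gaze_deg, fs, vel_thresh=30.0):
    """Label samples as saccadic where angular velocity exceeds vel_thresh (deg/s).

    gaze_deg: (N, 2) array of gaze positions in degrees.
    fs: sampling rate in Hz (e.g., 1000 for a lab tracker, 30 for a tablet).
    Returns a list of (onset_idx, offset_idx) pairs, one per saccade.
    """
    velocity = np.linalg.norm(np.diff(gaze_deg, axis=0), axis=1) * fs  # deg/s
    is_saccade = velocity > vel_thresh
    # Find contiguous runs of above-threshold samples.
    edges = np.diff(is_saccade.astype(int))
    onsets = np.flatnonzero(edges == 1) + 1
    offsets = np.flatnonzero(edges == -1) + 1
    if is_saccade[0]:
        onsets = np.r_[0, onsets]
    if is_saccade[-1]:
        offsets = np.r_[offsets, is_saccade.size]
    return list(zip(onsets, offsets))

# A synthetic 1-s trace at 1000 Hz containing one ~10-degree, 40-ms saccade.
t = np.arange(1000)
x = np.where(t < 500, 0.0, 10.0).astype(float)
x[500:540] = np.linspace(0.0, 10.0, 40)  # the saccadic transition
gaze = np.c_[x, np.zeros_like(x)]
saccades = detect_saccades(gaze, fs=1000)
```

At a 30 Hz sampling rate, a 40-ms saccade spans only one or two samples, which is one reason low-speed tablet trackers cannot resolve latency or gain as precisely as a 1000 Hz system.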
The fundamentals of eye tracking part 4: Tools for conducting an eye tracking study.
IF 4.6, Q2, Psychology
Behavior Research Methods Pub Date: 2025-01-06 DOI: 10.3758/s13428-024-02529-7
Diederick C Niehorster, Marcus Nyström, Roy S Hessels, Richard Andersson, Jeroen S Benjamins, Dan Witzner Hansen, Ignace T C Hooge
Researchers using eye tracking are heavily dependent on software and hardware tools to perform their studies, from recording eye-tracking data and visualizing it, to processing and analyzing it. This article provides an overview of available tools for research using eye trackers and discusses considerations to make when choosing which tools to adopt for one's study.
Behavior Research Methods, 57(1), 46. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11703944/pdf/
Citations: 0
Validation of the Emotionally Congruent and Incongruent Face-Body Static Set (ECIFBSS).
IF 4.6, Q2, Psychology
Behavior Research Methods Pub Date: 2025-01-03 DOI: 10.3758/s13428-024-02550-w
Anne-Sophie Puffet, Simon Rigoulot
Frequently, we perceive emotional information through multiple channels (e.g., face, voice, posture). These cues interact, facilitating emotional perception when they are congruent (similar across channels) compared to incongruent (different). Most previous studies on this congruency effect used stimuli from different sets, compromising their quality. In this context, we created and validated a new static stimulus set (ECIFBSS) featuring 1952 facial and body expressions of basic emotions in congruent and incongruent situations. We photographed 40 actors expressing facial emotions and body postures (anger, disgust, happiness, neutral, fear, surprise, and sadness) in both congruent and incongruent situations. The validation was conducted in two parts. In the first part, 76 participants performed a recognition task on facial and bodily expressions separately. In the second part, 40 participants performed the same recognition task, along with an evaluation of four features: intensity, authenticity, arousal, and valence. All emotions (face and body) were well recognized. Consistent with the literature, facial emotions were recognized better than body postures. Happiness was the most recognized facial emotion, while fear was the least. Among body expressions, anger had the highest recognition, while disgust was the least accurately recognized. Finally, facial and bodily expressions were considered moderately authentic, and the evaluations of intensity, valence, and arousal aligned with the dimensional model. The ECIFBSS offers static stimuli for studying facial and body expressions of basic emotions, providing a new tool to explore the integration of emotional information from various channels and their reciprocal influence.
Behavior Research Methods, 57(1), 41.
Citations: 0
Perception of emotion across cultures: Norms of valence, arousal, and sensory experience for 4923 Chinese words translated from English in Warriner et al. (2013).
IF 4.6, Q2, Psychology
Behavior Research Methods Pub Date: 2025-01-03 DOI: 10.3758/s13428-024-02580-4
Wei Yi, Haitao Xu, Kaiwen Man
Perception of emotion conveyed through language is influenced by embodied experiences obtained from social interactions, which may vary across cultures. To explore cross-cultural differences in the perception of emotion between Chinese and English speakers, this study collected norms of valence and arousal from 322 native Mandarin speakers for 4923 Chinese words translated from Warriner et al. (Behavior Research Methods, 45, 1191-1207, 2013). Additionally, sensory experience ratings for each word were collected. Analysis demonstrated that the reliability of this dataset is satisfactory, as indicated by comparisons with previous datasets. We examined the distributions of valence and arousal for the entire dataset, as well as for positive and negative emotion categories. Further analysis suggested that valence, arousal, and sensory experience correlated with various psycholinguistic variables, including the number of syllables, number of strokes, imageability, familiarity, concreteness, frequency, and age of acquisition. Cross-language comparison indicated that native speakers of Chinese and English differ in their perception of emotional valence and arousal, largely due to cross-cultural variations associated with ecological, sociopolitical, and religious factors. This dataset will be a valuable resource for research examining the impact of emotional and sensory information on Chinese lexical processing, as well as for bilingual research investigating the interplay between language and emotion across different cultural contexts.
Behavior Research Methods, 57(1), 43.
Citations: 0
Large language models can segment narrative events similarly to humans.
IF 4.6, Q2, Psychology
Behavior Research Methods Pub Date: 2025-01-03 DOI: 10.3758/s13428-024-02569-z
Sebastian Michelmann, Manoj Kumar, Kenneth A Norman, Mariya Toneva
Humans perceive discrete events such as "restaurant visits" and "train rides" in their continuous experience. One important prerequisite for studying human event perception is the ability of researchers to quantify when one event ends and another begins. Typically, this information is derived by aggregating behavioral annotations from several observers. Here, we present an alternative computational approach where event boundaries are derived using a large language model, GPT-3, instead of human annotations. We demonstrate that GPT-3 can segment continuous narrative text into events. GPT-3-annotated events are significantly correlated with human event annotations. Furthermore, these GPT-derived annotations achieve a good approximation of the "consensus" solution (obtained by averaging across human annotations); the boundaries identified by GPT-3 are closer to the consensus, on average, than boundaries identified by individual human annotators. This finding suggests that GPT-3 provides a feasible solution for automated event annotation, and it demonstrates a further parallel between human cognition and prediction in large language models. In the future, GPT-3 may thereby help to elucidate the principles underlying human event perception.
Behavior Research Methods, 57(1), 39. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11810054/pdf/
Citations: 0
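The claim that GPT-3's boundaries lie closer to the consensus than individual annotators' can be made concrete as a nearest-boundary distance. A minimal sketch with hypothetical boundary times (the numbers below are invented for illustration, not taken from the study):

```python
import numpy as np

def mean_boundary_distance(annotator, consensus):
    """Mean absolute distance (in seconds) from each of an annotator's
    event boundaries to the nearest consensus boundary."""
    annotator = np.asarray(annotator, dtype=float)
    consensus = np.asarray(consensus, dtype=float)
    # Pairwise |a_i - c_j| matrix, then the nearest consensus boundary per a_i.
    dists = np.abs(annotator[:, None] - consensus[None, :])
    return float(np.mean(dists.min(axis=1)))

# Hypothetical boundary times (seconds) for a short narrative.
consensus = [12.0, 45.0, 80.0, 130.0]   # average of many human annotators
human = [10.0, 48.0, 85.0, 128.0]       # one individual annotator
model = [12.5, 44.0, 81.0, 131.0]       # LLM-derived boundaries

d_human = mean_boundary_distance(human, consensus)
d_model = mean_boundary_distance(model, consensus)
```

In the invented example the model's mean distance to consensus (0.875 s) is smaller than the individual annotator's (3.0 s), mirroring the paper's pattern of results.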
The Visual Integration of Semantic and Spatial Information of Objects in Naturalistic Scenes (VISIONS) database: attentional, conceptual, and perceptual norms.
IF 4.6, Q2, Psychology
Behavior Research Methods Pub Date: 2025-01-03 DOI: 10.3758/s13428-024-02535-9
Elena Allegretti, Giorgia D'Innocenzo, Moreno I Coco
The complex interplay between the low- and high-level mechanisms governing our visual system can only be fully understood within ecologically valid naturalistic contexts. For this reason, in recent years, substantial efforts have been devoted to equipping the scientific community with datasets of realistic images normed on semantic or spatial features. Here, we introduce VISIONS, an extensive database of 1136 naturalistic scenes normed by 185 English speakers on a wide range of perceptual and conceptual measures across three levels of granularity: isolated object, whole scene, and object-in-scene. Each naturalistic scene contains a critical object systematically manipulated and normed with regard to its semantic consistency (e.g., a toothbrush vs. a flashlight in a bathroom) and spatial position (i.e., left, right). Normative data are also available for low-level (i.e., clarity, visual complexity) and high-level (i.e., name agreement, confidence, familiarity, prototypicality, manipulability) features of the critical object and its embedding scene context. Eye-tracking data from a free-viewing task further confirm the experimental validity of our manipulations while demonstrating, theoretically, that object semantics is acquired in extra-foveal vision and used to guide early overt attention. To our knowledge, VISIONS is the first database to exhaustively cover norms for objects integrated in scenes while also providing perceptual and conceptual norms for objects and scenes taken independently. We expect VISIONS to become an invaluable image dataset for examining and answering timely questions above and beyond vision science, where a diversity of perceptual, attentional, mnemonic, or linguistic processes could be explored as they develop, age, or become neuropathological.
Behavior Research Methods, 57(1), 42.
Citations: 0
A normative database of Swahili-Chinese paired associates.
IF 4.6, Q2, Psychology
Behavior Research Methods Pub Date: 2025-01-03 DOI: 10.3758/s13428-024-02531-z
Tian Fan, Wenbo Zhao, Bukuan Sun, Shaohang Liu, Yue Yin, Muzi Xu, Xiao Hu, Chunliang Yang, Liang Luo
Over the past few decades, Swahili-English and Lithuanian-English word pair databases have been extensively utilized in research on learning and memory. However, these normative databases are specifically designed for generating study stimuli in learning and memory research involving native (or fluent) English speakers. Consequently, they are not suitable for investigations of populations whose first language is not English, such as Chinese individuals. Notably, native Chinese speakers constitute a substantial proportion, approximately 18%, of the global population. The current study aims to establish a new database of translation equivalences, specifically tailored to facilitate research on learning, memory, and metacognition among the Chinese population. We present a comprehensive set of normative measures for 200 Swahili-Chinese paired associates, including recall accuracy, recall latency, error patterns, confidence ratings, perceived learning difficulty, judgments of learning, and perceived learning interestingness for all word pairs. Additionally, we include word-likeness ratings and word length for the Swahili words, and concreteness ratings, familiarity ratings, word frequency, and number of strokes for the Chinese words. This diverse array of measures, gathered across a substantial number of Swahili-Chinese word pairs, is poised to effectively support future research investigating the intricate processes of learning, memory, and metacognition within the Chinese population.
Behavior Research Methods, 57(1), 40.
Citations: 0
A multiverse assessment of the reliability of the self-matching task as a measurement of the self-prioritization effect.
IF 4.6, Q2, Psychology
Behavior Research Methods Pub Date: 2025-01-02 DOI: 10.3758/s13428-024-02538-6
Zheng Liu, Mengzhen Hu, Yuanrui Zheng, Jie Sui, Hu Chuan-Peng
The self-matching task (SMT) is widely used to investigate the cognitive mechanisms underlying the self-prioritization effect (SPE), wherein performance is enhanced for self-associated stimuli compared to other-associated ones. Although the SMT robustly elicits the SPE, there is a lack of data quantifying the reliability of this paradigm. This is problematic, given the prevalence of the reliability paradox in cognitive tasks: many well-established cognitive tasks demonstrate relatively low reliability when used to evaluate individual differences, despite exhibiting replicable effects at the group level. To fill this gap, this preregistered study investigated the reliability of the SPE derived from the SMT using a multiverse approach, combining all possible indicators and baselines reported in the literature. We first examined the robustness of 24 SPE measures across 42 datasets (N = 2250) using a meta-analytical approach. We then calculated the split-half reliability (r) and intraclass correlation coefficient (ICC2) for each SPE measure. Our findings revealed a robust group-level SPE across datasets. When evaluating individual differences, however, SPE indices derived from reaction time (RT) and efficiency exhibited higher split-half reliability than the other SPE indices, but it remained unsatisfactory (approximately 0.5). Across multiple time points, RT- and efficiency-based indices likewise demonstrated only moderate test-retest reliability as assessed by ICC2 (close to 0.5). These findings reveal a reliability paradox in the context of SMT-based SPE assessment. We discuss how individual-level reliability of this paradigm might be enhanced in future study designs.
Behavior Research Methods, 57(1), 37.
Citations: 0
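Split-half reliability of the kind reported here is conventionally estimated by repeatedly splitting each participant's trials into random halves, correlating the per-half index across participants, and applying the Spearman-Brown correction. A sketch on simulated data (the effect size, noise level, and sample sizes below are invented, not the study's):

```python
import numpy as np

rng = np.random.default_rng(0)

def split_half_reliability(trial_scores, n_splits=100):
    """Permutation-based split-half reliability with Spearman-Brown correction.

    trial_scores: (n_subjects, n_trials) array of per-trial effect scores
    (e.g., the RT difference between other- and self-associated trials).
    """
    n_subj, n_trials = trial_scores.shape
    rs = []
    for _ in range(n_splits):
        perm = rng.permutation(n_trials)
        half_a, half_b = perm[: n_trials // 2], perm[n_trials // 2 :]
        r = np.corrcoef(trial_scores[:, half_a].mean(axis=1),
                        trial_scores[:, half_b].mean(axis=1))[0, 1]
        rs.append(2 * r / (1 + r))  # Spearman-Brown step-up for half-length split
    return float(np.mean(rs))

# Simulated data: a stable per-subject effect (SD 20 ms) buried in large
# trial-to-trial noise (SD 150 ms) -- the regime where the reliability
# paradox appears despite a clear group-level effect.
true_effect = rng.normal(50, 20, size=(200, 1))
trials = true_effect + rng.normal(0, 150, size=(200, 64))
rel = split_half_reliability(trials)
```

With these assumed parameters the reliability lands near 0.5 even though every simulated subject has a genuine positive effect, illustrating how a robust group-level effect can coexist with mediocre individual-difference reliability.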
How accurately can we estimate spontaneous body kinematics from video recordings? Effect of movement amplitude on OpenPose accuracy.
IF 4.6, Q2, Psychology
Behavior Research Methods Pub Date: 2025-01-02 DOI: 10.3758/s13428-024-02546-6
Atesh Koul, Giacomo Novembre
Estimating how the human body moves in space and time (body kinematics) has important applications for industry, healthcare, and several research fields. Gold-standard methodologies for capturing body kinematics are expensive and impractical for naturalistic recordings, as they rely on infrared-reflective wearables and bulky instrumentation. To overcome these limitations, several algorithms have been developed to extract body kinematics from plain video recordings. This comes with a drop in accuracy that has not, however, been clearly quantified. To fill this knowledge gap, we analysed a dataset comprising 46 human participants exhibiting spontaneous movements of varying amplitude. Body kinematics were estimated simultaneously using the OpenPose (video-based) and Vicon (infrared-based) motion capture systems. OpenPose accuracy was assessed using Vicon estimates as ground truth. We report that OpenPose accuracy is overall moderate and varies substantially across participants and body parts. This is explained by variability in movement amplitude: OpenPose estimates are weak for low-amplitude movements, whereas large-amplitude movements (i.e., greater than ~10 cm) yield highly accurate estimates. The relationship between accuracy and movement amplitude is not linear (it is best described by an exponential or power function) and is relatively robust to camera-body distance. Together, these results delineate the limits of video-based motion capture and provide useful guidelines for future studies.
Behavior Research Methods, 57(1), 38. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11695451/pdf/
Citations: 0
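A power-law relationship between tracking error and movement amplitude, like the one reported here, can be estimated with a linear fit in log-log space. A sketch on simulated data (the amplitudes, error model, and exponent are illustrative assumptions, not the study's values):

```python
import numpy as np

# Hypothetical per-participant movement amplitudes (cm) and tracking errors,
# generated from an assumed power law: error = 5 * amplitude**-0.5, with
# multiplicative lognormal noise.
rng = np.random.default_rng(1)
amplitude = rng.uniform(1, 40, size=46)  # one value per participant
error = 5.0 * amplitude ** -0.5 * rng.lognormal(0.0, 0.1, size=46)

# A power law y = c * x**k is linear in log-log space:
#   log y = log c + k * log x
# so an ordinary least-squares fit on the logs recovers the exponent k.
k, log_c = np.polyfit(np.log(amplitude), np.log(error), 1)
```

An exponent near the generating value (-0.5 here) confirms the fit; on real data, plotting log error against log amplitude is also a quick visual check that a power law is the right functional form rather than, say, an exponential.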
Measuring visual ability in linguistically diverse populations.
IF 4.6, Q2, Psychology
Behavior Research Methods Pub Date: 2024-12-30 DOI: 10.3758/s13428-024-02579-x
Madison A Hooper, Andrew Tomarken, Isabel Gauthier
Measurement of object recognition (OR) ability could predict learning and success in real-world settings, and there is hope that it may reduce the bias often observed in cognitive tests. Although the measurement of visual OR is not expected to be influenced by the language of participants or the language of instructions, these assumptions remain largely untested. Here, we address the challenges of measuring OR abilities across linguistically diverse populations. In Study 1, we find that English-Spanish bilinguals, when randomly assigned to the English or Spanish version of the novel object memory test (NOMT), exhibit highly similar overall performance. Study 2 extends this by assessing psychometric equivalence using an approach grounded in item response theory (IRT). We examined whether groups fluent in English or Spanish differed in (a) latent OR ability as assessed by a three-parameter logistic IRT model, and (b) the mapping of observed item responses onto the latent OR construct, as assessed by differential item functioning (DIF) analyses. Spanish speakers performed better than English speakers, a difference we suggest is due to motivational differences between groups of vastly different size on the Prolific platform. That we found no substantial DIF between the groups tested in English or Spanish on the NOMT indicates measurement invariance. The feasibility of increasing diversity by combining groups tested in different languages remains unexplored. Adopting this approach could enable visual scientists to enhance diversity, equity, and inclusion in their research, and potentially in the broader application of their work in society.
Behavior Research Methods, 57(1), 36. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11685244/pdf/
Citations: 0
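The three-parameter logistic (3PL) IRT model referenced in Study 2 gives the probability of a correct item response as a guessing floor plus a logistic function of ability. A minimal sketch of the model's response function (the parameter values below are illustrative, not fitted to the NOMT):

```python
import math

def p_correct(theta, a, b, c):
    """Three-parameter logistic (3PL) IRT model: probability that a test-taker
    with latent ability theta answers correctly an item with discrimination a,
    difficulty b, and pseudo-guessing parameter c."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# When theta equals the item difficulty b, the model predicts the midpoint
# between the guessing floor c and 1: c + (1 - c) / 2.
p_mid = p_correct(theta=0.0, a=1.5, b=0.0, c=0.2)
```

DIF analyses then ask whether these item parameters (a, b, c) differ across groups matched on theta; the absence of substantial DIF is what licenses pooling the English- and Spanish-tested groups.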