How low can you go? Tracking eye movements during reading at different sampling rates.
Bernhard Angele, Zeynep Gunes Ozkan, Marina Serrano-Carot, Jon Andoni Duñabeitia
Behavior Research Methods, 57(7), 195. Published 2025-06-09. DOI: 10.3758/s13428-025-02713-3. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12149270/pdf/

Abstract: Eye-movement research has revolutionized our understanding of reading, but the use of eye tracking to investigate the reading process is still constrained by the cost of high-precision eye trackers, which restricts such research to well-resourced laboratories. It is therefore important to evaluate to what extent cognitive processes during reading can be measured with less expensive devices. One option is to use devices with a lower sampling rate, which are much less expensive than high-sampling-rate eye trackers. We recorded readers' eye movements during reading at different sampling rates and show that the classic effect of word frequency on fixation duration, which reflects ongoing cognitive processing during reading, can be measured at sampling rates ranging from 250 to 2000 Hz. We also simulated even lower sampling rates and show that, with a sufficiently large sample size, the word frequency effect can be detected even at very low sampling rates (30-125 Hz). Our results demonstrate that, in principle, low sampling rates are not an obstacle to studying the effects of cognitive processing during reading.
A comprehensive study of template-based frequency detection methods in SSVEP-based brain-computer interfaces.
Mohammad Norizadeh Cherloo, Homa Kashefi Amiri, Amir Mohammad Mijani, Liang Zhan, Mohammad Reza Daliri
Behavior Research Methods, 57(7), 196. Published 2025-06-09. DOI: 10.3758/s13428-025-02710-6

Abstract: SSVEP-based brain-computer interfaces (BCIs) have recently received increasing attention from researchers due to their high signal-to-noise ratio (SNR), high information transfer rate (ITR), and minimal user training, and various methods have been proposed to recognize the frequency of SSVEPs. This paper reviews the state-of-the-art frequency detection methods in SSVEP-based BCIs. Nineteen multi-channel, template-based SSVEP detection methods were studied, classified into four groups according to the basic model each employs at its core: canonical correlation analysis (CCA), multivariate synchronization index (MSI), task-related component analysis (TRCA), and correlated component analysis (CORRCA). The paper provides a description, a clear flowchart, and MATLAB code for each method, helping researchers use or extend existing SSVEP detection methods. Although all methods have been evaluated in separate studies, a comprehensive comparison was still missing. In this study, several experiments were conducted to assess their performance: all methods were applied to the benchmark 40-class SSVEP dataset from 35 subjects and evaluated in terms of classification accuracy, information transfer rate, and computational time. The results revealed four factors that make for an accurate, robust SSVEP detection method: (1) employing filter bank analysis to incorporate fundamental and harmonic frequency components, (2) utilizing calibration data to construct optimized reference signals, (3) integrating the spatial filters of all stimuli to construct classification features, and (4) calculating spatial filters from training trials. Furthermore, the results showed that filter bank ensemble task-related component analysis (FBETRCA) achieved the highest performance.
{"title":"Kalimah norms: Ratings for 2,467 modern standard Arabic words on two scales.","authors":"Alaa Alzahrani, Wafa Aljuaythin, Hassan Alshumrani, Alaa Mamoun Saleh, Mohamed M Mostafa","doi":"10.3758/s13428-025-02692-5","DOIUrl":"https://doi.org/10.3758/s13428-025-02692-5","url":null,"abstract":"<p><p>Crowdsourced normative ratings have benefited psycholinguistic research considerably. Yet, Modern Standard Arabic (MSA) has scarce word norms. To address this scarcity, the current study developed and validated the Kalimah norms in two experiments. In experiment 1, 803 native Arabic speakers provided age of acquisition (AOA) and concreteness (CNC) ratings for 2,467 MSA words, a subset of which was lexically ambiguous (N = 60). Correlations with 12 Arabic and international norms established the sufficient validity of the Kalimah norms. We also observed variations in the ratings for the distinct meanings of the same word form. In experiment 2, we show that our crowdsourced norms can be used to validate large language model (LLM)-generated norm ratings for MSA words. Based on this, we obtained LLM-generated CNC ratings for an additional set of 30,000 MSA words. We make both the Kalimah norms and LLM-generated CNC ratings freely available for research purposes.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":"57 7","pages":"194"},"PeriodicalIF":4.6,"publicationDate":"2025-06-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144257222","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Advancing preference testing in humans and animals.
Dana Pfefferle, Steven R Talbot, Pia Kahnau, Lauren C Cassidy, Ralf R Brockhausen, Anne Jaap, Veronika Deikun, Pinar Yurt, Alexander Gail, Stefan Treue, Lars Lewejohann
Behavior Research Methods, 57(7), 193. Published 2025-06-06. DOI: 10.3758/s13428-025-02668-5. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12144046/pdf/

Abstract: Preference tests help to determine how highly individuals value the different options available to them. During preference testing, two or more options are presented simultaneously, and the options are ranked based on the choices made. The presented options, however, influence each other, and this influence grows with the number of options. Multiple binary choice tests can reduce this influence, but conventional analysis methods do not reveal the relative strength of preferences, i.e., the preference difference between options. Here, we demonstrate that multiple binary comparisons can be used not only to rank but also to scale preferences among many options (i.e., to estimate their worth values). We developed and validated our approach on human image-preference data with known valence scores, examining how known valence ranges (high vs. low) converge on a scaled representation of the preference data. This approach allowed us to assess the valence of ranked options in mice and rhesus macaques. Using simulations, we further developed a way to incorporate additional options into existing rank orders without conducting binary choice tests against all original options, thus reducing the number of animal experiments needed. Two quality measures, consensus error and intransitivity ratio, allow the confidence of the scaled ranking to be assessed and the required measurements to be tailored to improve it further. The software is available as an R package ("simsalRbim"). Our approach optimizes preference testing, e.g., in welfare assessment, and allows the relative value of options presented to animals to be assessed efficiently and quantitatively.
{"title":"Lettuce entertain you: Assessing Sandwich Builder as a measure of auditory short-term memory.","authors":"Drew J McLaughlin, Arthur G Samuel","doi":"10.3758/s13428-025-02707-1","DOIUrl":"10.3758/s13428-025-02707-1","url":null,"abstract":"<p><p>Numerous language models propose a critical role of auditory short-term memory in spoken language processing, making reliable estimations of individual listeners' memory capacities essential in linguistic research. Although a variety of cognitive tasks exist for assessing both visual and auditory short-term capacities in a laboratory environment (e.g., digit span), these types of tasks are typically not engaging for the participant. In the current study, we assess the validity and reliability of a novel \"gamified\" short-term memory assessment tool called Sandwich Builder. In Sandwich Builder, the participant completes 12 trials of a sandwich-building game. In each trial, a drive-thru customer's sandwich order is presented auditorily, including a list of 1 to 10 sandwich ingredients (e.g., salami, cheddar cheese, mayo, and olives). The participant needs to remember these sandwich ingredients and then build the customer's sandwich by selecting the necessary ingredients in the correct order. We evaluate Sandwich Builder as an estimate of auditory short-term memory, finding significant correlations with other classic short-term memory measures. Additionally, we measure affect (i.e., mood), fatigue, and motivation following Sandwich Builder as compared to classic tasks: Participants leave Sandwich Builder in a better mood, less fatigued, and more motivated than they leave other memory tasks. Sandwich Builder shows strong convergent, discriminant, and predictive validity, and moderate-to-good test-retest reliability. Finally, we examine the practicality of Sandwich Builder as a measure of individual differences in speech perception research, providing expected power curves for two common paradigms. Altogether, we robustly demonstrate the utility of Sandwich Builder, which is openly available online.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":"57 7","pages":"191"},"PeriodicalIF":4.6,"publicationDate":"2025-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12137375/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144224177","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"TittaLSL: A toolbox for creating networked eye-tracking experiments in Python and MATLAB with Tobii eye trackers.","authors":"Diederick C Niehorster, Marcus Nyström","doi":"10.3758/s13428-025-02714-2","DOIUrl":"10.3758/s13428-025-02714-2","url":null,"abstract":"<p><p>Studying the behavior of multiple participants using networked eye-tracking setups is of increasing interest to researchers. However, to conduct such studies, researchers have had to create complicated ad hoc solutions for streaming gaze over a local network. Here we present TittaLSL, a toolbox that enables creating networked multi-participant experiments using Tobii eye trackers with minimal programming effort. An evaluation using 600-Hz gaze streams sent between 15 different eye-tracking stations revealed that the end-to-end latency, including the eye tracker's gaze estimation processes, achieved by TittaLSL was 3.05 ms. This was only 0.10 ms longer than when gaze samples were received from a locally connected eye tracker. We think that these latencies are low enough that TittaLSL is suitable for the majority of networked eye-tracking experiments, even when the gaze needs to be shown in real time.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":"57 7","pages":"190"},"PeriodicalIF":4.6,"publicationDate":"2025-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12137467/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144224178","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Creating a social virtual reality application for psychological research: A tutorial.","authors":"Marius Rubo","doi":"10.3758/s13428-025-02693-4","DOIUrl":"10.3758/s13428-025-02693-4","url":null,"abstract":"<p><p>Social virtual reality (VR) setups allow two or more individuals to interact in a shared virtual environment while embodying computerized avatars. Such setups allow for detailed investigations into social-cognitive processes, can extend the functionality of existing single-user VR applications and can be used to design novel educational settings and psychotherapeutic treatments. While researchers may use commercially available social VR applications in addressing a range of research questions, an in-house software solution is typically developed when a research project requires more flexible experimental control or should conform to the highest data security standards. This tutorial demonstrates the construction of a social VR application based on an example software that mimics a VR setup but can be run and explored on an individual computer. Information flow emphasizes transparency to allow researchers to flexibly adapt the software to their own environment and research demands. The software is designed for use in controlled laboratory environments but can be extended for use in field research.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":"57 7","pages":"192"},"PeriodicalIF":4.6,"publicationDate":"2025-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12137431/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144224176","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
forceplate: An R package for processing raw force-plate time-series data.
Raphael Hartmann, Anton Koger, Elisa R Straub, Leif Johannsen, Iring Koch, Denise N Stephan, Hermann Müller, Andrea Kiesel
Behavior Research Methods, 57(7), 187. Published 2025-06-03. DOI: 10.3758/s13428-025-02657-8. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12133998/pdf/

Abstract: Evidence for the interaction between cognitive and motor processes is increasing. Conventional approaches to analyzing balance control aggregate sway data over seconds to minutes, which makes it difficult to discern the impact of single cognitive processes on balance control. In this paper, we propose a novel, event-related approach to investigating how cognitive task performance affects balance control on small time scales using a force plate. A force plate continuously measures forces and moments in each spatial dimension over time. To facilitate the processing of the resulting time-series data, we developed an R package called forceplate. The package segments the data so that each trial, corresponding to a cognitive task, has its own time series. A low-pass filter can be applied to remove artifacts (e.g., muscle twitches or electrical noise), and a baseline correction can be performed to improve the comparability of trials. For each trial's time series, user-defined descriptive statistics (e.g., mean or standard deviation) can be calculated for user-defined time bins around an event (e.g., stimulus or response onset). The package generates a dataset with one or more measures per trial (depending on the number of time bins) that can be used for further analysis, such as a (mixed-effects) analysis of variance. The R package and the underlying procedure aim to establish a standard for processing force-plate data collected in cognitive experiments using the event-related approach, facilitating the processing of such data and enhancing comparability between studies.
gazeMapper: A tool for automated world-based analysis of gaze data from one or multiple wearable eye trackers.
Diederick C Niehorster, Roy S Hessels, Marcus Nyström, Jeroen S Benjamins, Ignace T C Hooge
Behavior Research Methods, 57(7), 188. Published 2025-06-03. DOI: 10.3758/s13428-025-02704-4. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12134025/pdf/

Abstract: The problem: wearable eye trackers deliver eye-tracking data on a scene video that is acquired by a camera affixed to the participant's head. Analyzing and interpreting such head-centered data is difficult and laborious manual work. Automated methods that map eye-tracking data to a world-centered reference frame (e.g., screens and tabletops) are available and usually make use of fiducial markers. However, such mapping methods may be difficult to implement, expensive, and eye tracker-specific. The solution: here we present gazeMapper, an open-source tool for automated mapping and processing of eye-tracking data. gazeMapper can (1) transform head-centered data to planes in the world, (2) synchronize recordings from multiple participants, and (3) determine data quality measures, e.g., accuracy and precision. gazeMapper comes with a GUI application (Windows, macOS, and Linux) and supports 11 different wearable eye trackers from AdHawk, Meta, Pupil, SeeTrue, SMI, Tobii, and Viewpointsystem. It is also possible to sidestep the GUI and use gazeMapper directly as a Python library.
Correction: The complexity of measuring reliability in learning tasks: An illustration using the Alternating Serial Reaction Time Task.
Bence C Farkas, Attila Krajcsi, Karolina Janacsek, Dezso Nemeth
Behavior Research Methods, 57(7), 185. Published 2025-06-03. DOI: 10.3758/s13428-025-02670-x. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12134027/pdf/