{"title":"Geofencing in location-based behavioral research: Methodology, challenges, and implementation.","authors":"Yury Shevchenko, Ulf-Dietrich Reips","doi":"10.3758/s13428-023-02213-2","DOIUrl":"10.3758/s13428-023-02213-2","url":null,"abstract":"<p><p>This manuscript presents a novel geofencing method in behavioral research. Geofencing, built upon geolocation technology, constitutes virtual fences around specific locations. Every time a participant crosses the virtual border around the geofenced area, an event can be triggered on a smartphone, e.g., the participant may be asked to complete a survey. The geofencing method can alleviate the problems of constant location tracking, such as recording sensitive geolocation information and battery drain. In scenarios where locations for geofencing are determined by participants (e.g., home, workplace), no location data need to be transferred to the researcher, so this method can ensure privacy and anonymity. Given the widespread use of smartphones and mobile Internet, geofencing has become a feasible tool in studying human behavior and cognition outside of the laboratory. The method can help advance theoretical and applied psychological science at a new frontier of context-aware research. At the same time, there is a lack of guidance on how and when geofencing can be applied in research. This manuscript aims to fill the gap and ease the adoption of the geofencing method. We describe the current challenges and implementations in geofencing and present three empirical studies in which we evaluated the geofencing method using the Samply application, a tool for mobile experience sampling research. The studies show that sensitivity and precision of geofencing were affected by the type of event, location radius, environment, operating system, and user behavior. Potential implications and recommendations for behavioral research are discussed.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":" ","pages":"6411-6439"},"PeriodicalIF":4.6,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11362315/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10428016","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tradeoffs of estimating reaction time with absolute and relative thresholds.
Jarrod Blinch, Coby Trovinger, Callie R DeWinne, Guilherme de Cellio Martins, Chelsea N Ifediora, Maryam Nourollahimoghadam, John R Harry, Ty B Palmer
Behavior Research Methods, pp. 4695–4715 (published 2024-08-01). DOI: 10.3758/s13428-023-02211-4

Abstract: Measuring the duration of cognitive processing with reaction time is fundamental to several subfields of psychology. Many methods exist for estimating movement initiation when measuring reaction time, but there is an incomplete understanding of their relative performance. The purpose of the present study was to identify and compare the tradeoffs of 19 estimates of movement initiation across two experiments. We focused our investigation on estimating movement initiation on each trial with filtered kinematic and kinetic data. Nine of the estimates involved absolute thresholds (e.g., acceleration 1000 back to 200 mm/s², a micro push-button switch), and the remaining ten used relative thresholds (e.g., force extrapolation, 5% of maximum velocity). The criteria were the duration of reaction time, immunity to movement amplitude, responsiveness to visual feedback during movement execution, reliability, and the number of manually corrected trials (efficacy). The three best overall estimates, in descending order, were yank extrapolation, force extrapolation, and acceleration 1000 back to 200 mm/s². The sensitive micro push-button switch, the simplest estimate, had a decent overall score, but it was a late estimate of movement initiation. The relative thresholds based on kinematics had the six worst overall scores; an issue with these thresholds was that they were biased by movement amplitude. In summary, we recommend measuring reaction time on each trial with one of the three best overall estimates of movement initiation. Future research should continue to refine existing estimates while also exploring new ones.

LexMAL: A quick and reliable lexical test for Malay speakers.
Soon Tat Lee, Walter J B van Heuven, Jessica M Price, Christine Xiang Ru Leong
Behavior Research Methods, pp. 4563–4581 (published 2024-08-01). DOI: 10.3758/s13428-023-02202-5. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11289131/pdf/

Abstract: Objective language proficiency measures have been found to provide better and more consistent estimates of bilinguals' language processing than self-rated proficiency (e.g., Tomoschuk et al., 2019; Wen & van Heuven, 2017a). However, objectively measuring language proficiency is often not possible because of a lack of quick and freely available language proficiency tests (Park et al., 2022). Therefore, quick, valid vocabulary tests, such as LexTALE (Lemhöfer & Broersma, 2012) and its extensions (e.g., LexITA: Amenta et al., 2020; LEXTALE-FR: Brysbaert, 2013; LexPT: Zhou & Li, 2022), have been developed to reliably assess the language proficiency of speakers of various languages. The present study introduces the Lexical Test for Malay Speakers (LexMAL), which estimates language proficiency for Malay first-language (L1) and second-language (L2) speakers. An initial 180-item LexMAL prototype was evaluated with 60 Malay L1 and 60 L2 speakers in Experiment 1. Sixty words and 30 nonwords with the highest discriminative power, spanning the full difficulty range, were selected for the final LexMAL based on point-biserial correlations and an item response theory analysis. The validity of LexMAL was demonstrated through reliable discrimination between L1 and L2 speakers, significant correlations between LexMAL scores and performance on other Malay language tasks (i.e., translation accuracy and cloze test scores), and LexMAL outperforming self-rated proficiency. A validation study (Experiment 2) that tested the 90-item final LexMAL with a different group of Malay L1 (N = 61) and L2 (N = 61) speakers replicated the findings of Experiment 1. LexMAL is freely available to researchers at www.lexmal.org.

Test-retest reliability of reinforcement learning parameters.
Jessica V Schaaf, Laura Weidinger, Lucas Molleman, Wouter van den Bos
Behavior Research Methods, pp. 4582–4599 (published 2024-08-01). DOI: 10.3758/s13428-023-02203-4. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11289054/pdf/

Abstract: It has recently been suggested that parameter estimates of computational models can be used to understand individual differences at the process level. One area of research in which this approach, called computational phenotyping, has taken hold is computational psychiatry. One requirement for successful computational phenotyping is that behavior and parameters are stable over time. Surprisingly, the test-retest reliability of behavior and model parameters remains unknown for most experimental tasks and models. The present study seeks to close this gap by investigating the test-retest reliability of canonical reinforcement learning models in the context of two often-used learning paradigms: a two-armed bandit task and a reversal learning task. We tested independent cohorts for the two tasks (N = 69 and N = 47) via an online testing platform, with a between-test interval of five weeks. Whereas reliability was high for personality and cognitive measures (ICCs ranging from .67 to .93), it was generally poor for the parameter estimates of the reinforcement learning models (ICCs ranging from .02 to .52 for the bandit task and from .01 to .71 for the reversal learning task). Simulations indicated that our procedures could detect high test-retest reliability, which suggests that a significant proportion of the variability must be ascribed to the participants themselves. In support of that hypothesis, we show that mood (stress and happiness) can partly explain within-participant variability. Taken together, these results are critical for current practices in computational phenotyping and suggest that individual variability should be taken into account in the future development of the field.

It is not real until it feels real: Testing a new method for simulation of eyewitness experience with virtual reality technology and equipment.
Kaja Glomb, Przemysław Piotrowski, Izabela Anna Romanowska
Behavior Research Methods, pp. 4336–4350 (published 2024-08-01). DOI: 10.3758/s13428-023-02186-2. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11289041/pdf/

Abstract: Laboratory research in the psychology of witness testimony is often criticized for its lack of ecological validity, including the use of unrealistic artificial stimuli to test memory performance. The purpose of our study is to present a method that can serve as an intermediary between laboratory research and field studies or naturalistic experiments, which are difficult to control and administer. It uses Video-360° technology and virtual reality (VR) equipment, which cuts subjects off from external stimuli and gives them control over the visual field. This can potentially increase the realism of the eyewitness's experience. To test the method, we conducted an experiment comparing the immersion effect, emotional response, and memory performance between subjects who watched a video presenting a mock crime on a head-mounted display (VR goggles; n = 57) and those who watched it on a screen (n = 50). The results suggest that, compared to those who watched the video on a screen, the VR group had a deeper sense of immersion, that is, of being part of the scene presented. At the same time, they were not distracted or cognitively overloaded by the more complex virtual environment, and remembered just as much detail about the crime as those viewing it on the screen. Additionally, we noted significant differences between subjects in ratings of emotions felt during the video, which may suggest that the two formats evoke different types of discrete emotions. Overall, the results confirm the usefulness of the proposed method in witness research.

{"title":"Evaluating the Tobii Pro Glasses 2 and 3 in static and dynamic conditions.","authors":"V Onkhar, D Dodou, J C F de Winter","doi":"10.3758/s13428-023-02173-7","DOIUrl":"10.3758/s13428-023-02173-7","url":null,"abstract":"<p><p>Over the past few decades, there have been significant developments in eye-tracking technology, particularly in the domain of mobile, head-mounted devices. Nevertheless, questions remain regarding the accuracy of these eye-trackers during static and dynamic tasks. In light of this, we evaluated the performance of two widely used devices: Tobii Pro Glasses 2 and Tobii Pro Glasses 3. A total of 36 participants engaged in tasks under three dynamicity conditions. In the \"seated with a chinrest\" trial, only the eyes could be moved; in the \"seated without a chinrest\" trial, both the head and the eyes were free to move; and during the walking trial, participants walked along a straight path. During the seated trials, participants' gaze was directed towards dots on a wall by means of audio instructions, whereas in the walking trial, participants maintained their gaze on a bullseye while walking towards it. Eye-tracker accuracy was determined using computer vision techniques to identify the target within the scene camera image. The findings showed that Tobii 3 outperformed Tobii 2 in terms of accuracy during the walking trials. Moreover, the results suggest that employing a chinrest in the case of head-mounted eye-trackers is counterproductive, as it necessitates larger eye eccentricities for target fixation, thereby compromising accuracy compared to not using a chinrest, which allows for head movement. Lastly, it was found that participants who reported higher workload demonstrated poorer eye-tracking accuracy. The current findings may be useful in the design of experiments that involve head-mounted eye-trackers.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":" ","pages":"4221-4238"},"PeriodicalIF":4.6,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11289326/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9951628","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mouth and facial informativeness norms for 2276 English words.
Anna Krason, Ye Zhang, Hillarie Man, Gabriella Vigliocco
Behavior Research Methods, pp. 4786–4801 (published 2024-08-01). DOI: 10.3758/s13428-023-02216-z. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11289175/pdf/

Abstract: Mouth and facial movements are part and parcel of face-to-face communication. The primary way of assessing their role in speech perception has been by manipulating their presence (e.g., by blurring the area of a speaker's lips) or by looking at how informative different mouth patterns are for the corresponding phonemes (or visemes; e.g., /b/ is visually more salient than /g/). However, moving beyond the informativeness of single phonemes is challenging due to coarticulation and language variations (to name just a few factors). Here, we present mouth and facial informativeness (MaFI) norms for words, i.e., how visually informative words are based on their corresponding mouth and facial movements. MaFI was quantified for 2276 English words, varying in length, frequency, and age of acquisition, using the phonological distance between a word and participants' speechreading guesses. The results showed that the MaFI norms capture the dynamic nature of mouth and facial movements per word well: words containing phonemes with roundness and frontness features, as well as visemes characterized by lower-lip tuck, lip rounding, and lip closure, are visually more informative. We also showed that the more of these features a word contains, the more informative it is based on mouth and facial movements. Finally, we demonstrated that the MaFI norms generalize across different variants of the English language. The norms are freely accessible via the Open Science Framework (https://osf.io/mna8j/) and can benefit any language researcher using audiovisual stimuli (e.g., to control for the effect of speech-linked mouth and facial movements).

{"title":"ESMira: A decentralized open-source application for collecting experience sampling data.","authors":"David Lewetz, Stefan Stieger","doi":"10.3758/s13428-023-02194-2","DOIUrl":"10.3758/s13428-023-02194-2","url":null,"abstract":"<p><p>This paper introduces ESMira, a server and mobile app (Android, iOS) developed for research projects using experience sampling method (ESM) designs. ESMira offers a very simple setup process and ease of use, while being free, decentralized, and open-source (source code is available on GitHub). The ongoing development of ESMira started in early 2019, with a focus on scientific requirements (e.g., informed consent, ethical considerations), data security (e.g., encryption), and data anonymity (e.g., completely anonymous data workflow). ESMira sets itself apart from other platforms by both being free of charge and providing study administrators with full control over study data without the need for specific technological skills (e.g., programming). This means that study administrators can have ESMira running on their own webspace without needing much technical knowledge, allowing them to remain independent from any third-party service. Furthermore, ESMira offers an extensive list of features (e.g., an anonymous built-in chat to contact participants; a reward system that allows participant incentivization without breaching anonymity; live graphical feedback for participants) and can deal with complex study designs (e.g., nested time-based sampling). In this paper, we illustrate the basic structure of ESMira, explain how to set up a new server and create studies, and introduce the platform's basic functionalities.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":" ","pages":"4421-4434"},"PeriodicalIF":4.6,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11288990/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10042052","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Minimal reporting guideline for research involving eye tracking (2023 edition).
Matt J Dunn, Robert G Alexander, Onyekachukwu M Amiebenomo, Gemma Arblaster, Denize Atan, Jonathan T Erichsen, Ulrich Ettinger, Mario E Giardini, Iain D Gilchrist, Ruth Hamilton, Roy S Hessels, Scott Hodgins, Ignace T C Hooge, Brooke S Jackson, Helena Lee, Stephen L Macknik, Susana Martinez-Conde, Lee Mcilreavy, Lisa M Muratori, Diederick C Niehorster, Marcus Nyström, Jorge Otero-Millan, Michael M Schlüssel, Jay E Self, Tarkeshwar Singh, Nikolaos Smyrnis, Andreas Sprenger
Behavior Research Methods, pp. 4351–4357 (published 2024-08-01). DOI: 10.3758/s13428-023-02187-1. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11225961/pdf/

Abstract: A guideline is proposed that comprises the minimum items to be reported in research studies involving an eye tracker and human or non-human primate participant(s). This guideline was developed over a 3-year period using a consensus-based process via an open invitation to the international eye tracking community. This guideline will be reviewed at maximum intervals of 4 years.

Great minds think alike: New measures to quantify the similarity of recalls.
Alexandra F Ortmann, Michael T Bixter, Christian C Luhmann
Behavior Research Methods, pp. 4239–4254 (published 2024-08-01). DOI: 10.3758/s13428-023-02174-6

Abstract: Given the recent interest in how memory operates in social contexts, it is more important than ever to meaningfully measure the similarity between recall sequences of different individuals. To date, such similarity has been quantified primarily with order-agnostic measures, along with some order-sensitive measures specific to memory research, and there is no agreement on any one preferred measure. However, edit distance measures have not been used to quantify the similarity of recall sequences in collaborative memory studies. In the current study, we review a broad range of similarity measures, highlighting commonalities and differences. Using simulations and behavioral data, we show that edit distances do measure a memory-relevant factor of similarity and capture information distinct from that captured by order-agnostic measures. We answer illustrative research questions that demonstrate potential applications of edit distances in collaborative and individual memory settings and reveal the unique impact collaboration has on similarity.
