{"title":"Neural correlates for word-frequency effect in Chinese natural reading.","authors":"Xiaolin Mei, Shuyuan Chen, Xinyi Xia, Bo Yang, Yanping Liu","doi":"10.3758/s13414-024-02894-7","DOIUrl":"https://doi.org/10.3758/s13414-024-02894-7","url":null,"abstract":"<p><p>The word frequency effect has long been of interest in reading research because of its critical role in exploring the mental processing underlying reading behaviors. Access to word frequency information has long been considered an indicator of the beginning of lexical processing and the most sensitive marker for studying when the brain begins to extract semantic information (Sereno & Rayner, Brain and Cognition, 42, 78-81, 2000; Trends in Cognitive Sciences, 7, 489-493, 2003). While the word frequency effect has been extensively studied in numerous eye-tracking and traditional EEG studies using the RSVP paradigm, there is a lack of corresponding evidence in studies of natural reading. To find the neural correlates of the word frequency effect, we conducted a study of Chinese natural reading using EEG and eye-tracking coregistration to examine the time course of lexical processing. Our results reliably showed that the word frequency effect first appeared in the N200 time window, over bilateral occipitotemporal regions. Additionally, the word frequency effect was reflected in the N400 time window, spreading from the occipital region to the central parietal and frontal regions. Our study provides the first evidence of neural correlates of the word-frequency effect in natural Chinese reading, shedding new light on lexical processing in natural reading and offering a basis for further reading research that considers neural correlates in a realistic manner.</p>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2024-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141592153","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Invariant contexts reduce response time variability in visual search in an age-specific way: A comparison of children, teenagers, and adults","authors":"Chengyu Fan, Artyom Zinchenko, Lihan Chen, Jiao Wu, Yeke Qian, Xuelian Zang","doi":"10.3758/s13414-024-02926-2","DOIUrl":"10.3758/s13414-024-02926-2","url":null,"abstract":"<div><p>Contextual cueing is a phenomenon in which repeatedly encountered arrays of items can enhance the visual search for a target item. This is widely attributed to attentional guidance driven by contextual memory acquired during visual search. Some studies suggest that children may have an immature ability to use contextual cues compared to adults, while others argue that contextual learning capacity is similar across ages. To test the development of context-guided attention, this study compared contextual cueing effects among three age groups: adults (aged 18–33 years, <i>N</i> = 32), teenagers (aged 15–17 years, <i>N</i> = 41), and younger children (aged 8–9 years, <i>N</i> = 43). Moreover, this study introduced a measure of response time variability that tracks fluctuations in response time throughout the experiment, in addition to the conventional analysis of response times. The results showed that all age groups demonstrated significantly faster responses in repeated than non-repeated search contexts. Notably, adults and teenagers exhibited smaller response time variability in repeated contexts than in non-repeated ones, while younger children did not. This implies that children are less efficient at consolidating contextual information into a stable memory representation, which may lead to less stable attentional guidance during visual search.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2024-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141592152","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Other ethnicity effects in ensemble coding of facial expressions","authors":"Zhenhua Zhao, Kelun Yaoma, Yujie Wu, Edwin Burns, Mengdan Sun, Haojiang Ying","doi":"10.3758/s13414-024-02920-8","DOIUrl":"10.3758/s13414-024-02920-8","url":null,"abstract":"<div><p>Cultural difference in ensemble emotion perception is an important research question, providing insights into the complexity of human cognition and social interaction. Here, we conducted two experiments to investigate how emotion perception would be affected by other ethnicity effects and ensemble coding. In Experiment 1, two groups of Asian and Caucasian participants were tasked with assessing the average emotion of faces from their own ethnic group, the other ethnic group, and mixed-ethnicity groups. Results revealed that participants exhibited relatively accurate yet amplified emotion perception of own-group faces, with a tendency to overestimate the weight of the faces from the other ethnic group. In Experiment 2, Asian participants were instructed to discern the emotion of a target face surrounded by Caucasian and Asian faces. Results corroborated earlier findings, indicating that while participants accurately perceived emotions in faces of their own ethnicity, their perception of Caucasian faces was noticeably influenced by the presence of surrounding Asian faces. These findings collectively support the notion that the <i>other ethnicity effect</i> stems from differential emotional amplification inherent in ensemble coding of emotion perception.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2024-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141592155","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Non-symbolic estimation of big and small ratios with accurate and noisy feedback","authors":"Nicola J. Morton, Matt Grice, Simon Kemp, Randolph C. Grace","doi":"10.3758/s13414-024-02914-6","DOIUrl":"10.3758/s13414-024-02914-6","url":null,"abstract":"<div><p>The ratio of two magnitudes can take one of two values depending on the order they are operated on: a ‘big’ ratio of the larger to smaller magnitude, or a ‘small’ ratio of the smaller to larger. Although big and small ratio scales have different metric properties and carry divergent predictions for perceptual comparison tasks, no psychophysical studies have directly compared them. Two experiments are reported in which subjects implicitly learned to compare pairs of brightnesses and line lengths by non-symbolic feedback based on the scaled big ratio, small ratio or difference of the magnitudes presented. Results of Experiment 1 showed all three operations were learned quickly and estimated with a high degree of accuracy that did not significantly differ across groups or between intensive and extensive modalities, though regressions on individual data suggested an overall predisposition towards differences. Experiment 2 tested whether subjects learned to estimate the operation trained or to associate stimulus pairs with correct responses. For each operation, Gaussian noise was added to the feedback that was constant for repetitions of each pair. For all subjects, coefficients for the added noise component were negative when entered in a regression model alongside the trained differences or ratios, and were statistically significant in 80% of individual cases. Thus, subjects learned to estimate the comparative operations and effectively ignored or suppressed the added noise. These results suggest the perceptual system is highly flexible in its capacity for non-symbolic computation, which may reflect a deeper connection between perceptual structure and mathematics.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2024-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11410853/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141592154","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
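The three comparative operations in the abstract above (big ratio, small ratio, difference) can be made concrete with a minimal sketch. This is an illustration with hypothetical magnitudes, not the authors' code or stimulus values:

```python
def big_ratio(a, b):
    """Ratio of the larger magnitude to the smaller (always >= 1)."""
    return max(a, b) / min(a, b)

def small_ratio(a, b):
    """Ratio of the smaller magnitude to the larger (always <= 1)."""
    return min(a, b) / max(a, b)

def difference(a, b):
    """Unsigned difference between the two magnitudes."""
    return abs(a - b)

# Two hypothetical line lengths (arbitrary units). The two ratio scales
# are reciprocals of each other, yet have different metric properties:
# big ratios grow without bound while small ratios compress toward 0.
print(big_ratio(8, 2))    # 4.0
print(small_ratio(8, 2))  # 0.25
print(difference(8, 2))   # 6
```

All three operations are symmetric in their arguments, which matches the abstract's point that the value depends only on which of the two orderings (larger/smaller vs. smaller/larger) the scale adopts.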
{"title":"Endogenous attention enhances contrast appearance regardless of stimulus contrast","authors":"Zi-Xi Luo, Wang-Nan Pan, Xiang-Jun Zeng, Liang-Yu Gong, Yong-Chun Cai","doi":"10.3758/s13414-024-02929-z","DOIUrl":"10.3758/s13414-024-02929-z","url":null,"abstract":"<div><p>There has been enduring debate on how attention alters contrast appearance. Recent research indicates that exogenous attention enhances contrast appearance for low-contrast stimuli but attenuates it for high-contrast stimuli. Similarly, one study has demonstrated that endogenous attention heightens perceived contrast for low-contrast stimuli, yet none have explored its impact on high-contrast stimuli. In this study, we investigated how endogenous attention alters contrast appearance, with a specific focus on high-contrast stimuli. In Experiment 1, we utilized the rapid serial visual presentation (RSVP) paradigm to direct endogenous attention, revealing that contrast appearance was enhanced for both low- and high-contrast stimuli. To eliminate potential influences from the confined attention field in the RSVP paradigm, Experiment 2 adopted the letter identification paradigm, deploying attention across a broader visual field. Results consistently indicated that endogenous attention increased perceived contrast for high-contrast stimuli. Experiment 3 employed equiluminant chromatic letters as stimuli in the letter identification task to eliminate potential interference from contrast adaptation, which might have occurred in Experiment 2. Remarkably, the boosting effect of endogenous attention persisted. Combining the results from these experiments, we propose that endogenous attention consistently enhances contrast appearance, irrespective of stimulus contrast levels. This stands in contrast to the effects of exogenous attention, suggesting that the mechanisms through which endogenous attention alters contrast appearance may differ from those of exogenous attention.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2024-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141592151","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ATLAS: Mapping ATtention’s Location And Size to probe five modes of serial and parallel search","authors":"Gregory Davis","doi":"10.3758/s13414-024-02921-7","DOIUrl":"10.3758/s13414-024-02921-7","url":null,"abstract":"<div><p>Conventional visual search tasks do not address attention directly and their core manipulation of ‘set size’ – the number of displayed items – introduces stimulus confounds that hinder interpretation. However, alternative approaches have not been widely adopted, perhaps reflecting their complexity, assumptions, or indirect attention-sampling. Here, a new procedure, the ATtention Location And Size (‘ATLAS’) task, used probe displays to track attention’s location, breadth, and guidance during search. Though most probe displays comprised six items, participants reported only the single item they judged themselves to have perceived most clearly – indexing the attention ‘peak’. By sampling peaks across variable ‘choice sets’, the size and position of the attention window during search was profiled. These indices appeared to distinguish narrow from broad attention, signalled attention to pairs of items where it arose, and tracked evolving attention-guidance over time. ATLAS is designed to discriminate five key search modes: serial-unguided, sequential-guided, unguided attention to ‘clumps’ with local guidance, and broad parallel-attention with or without guidance. This initial investigation used only an example set of highly regular stimuli, but its broader potential should be investigated.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2024-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11410986/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141565194","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Distractor similarity and category variability effects in search","authors":"Arryn Robbins, Anatolii Evdokimov","doi":"10.3758/s13414-024-02924-4","DOIUrl":"10.3758/s13414-024-02924-4","url":null,"abstract":"<div><p>Categorical search involves looking for objects based on category information from long-term memory. Previous research has shown that search efficiency in categorical search is influenced by target/distractor similarity and category variability (i.e., heterogeneity). However, the interaction between these factors and their impact on different subprocesses of search remains unclear. This study examined the effects of target/distractor similarity and category variability on processes of categorical search. Using multidimensional scaling, we manipulated target/distractor similarity and measured category variability for target categories that participants searched for. Eye-tracking data were collected to examine attentional guidance and target verification. The results demonstrated that the effect of category variability on response times (RTs) was dependent on the level of target/distractor similarity. Specifically, when distractors were highly similar to target categories, there was a negative relation between RTs and variability, with low-variability categories producing longer RTs than higher-variability categories. Surprisingly, this trend was only present in the eye-tracking measures of target verification but not attentional guidance. Our results suggest that searchers more effectively guide attention to low-variability categories compared to high-variability categories, regardless of the degree of similarity between targets and distractors. However, low category variability interferes with target match decisions when distractors are highly similar to the category; thus, the advantage that low category variability provides to searchers is not equal across processes of search.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2024-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.3758/s13414-024-02924-4.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141565195","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Activation thresholds, not quitting thresholds, account for the low prevalence effect in dynamic search.","authors":"Mark W Becker, Andrew Rodriguez, Jeffrey Bolkhovsky, Chad Peltier, Sylvia B Guillory","doi":"10.3758/s13414-024-02919-1","DOIUrl":"https://doi.org/10.3758/s13414-024-02919-1","url":null,"abstract":"<p><p>The low-prevalence effect (LPE) is the finding that target detection rates decline as targets become less frequent in a visual search task. A major source of this effect is thought to be that fewer targets result in lower quitting thresholds, i.e., observers respond target-absent after looking at fewer items compared to searches with a higher prevalence of targets. However, a lower quitting threshold does not directly account for an LPE in searches where observers continuously monitor a dynamic display for targets. In these tasks there are no discrete \"trials\" to which a quitting threshold could be applied. This study examines whether the LPE persists in this type of dynamic search context. Experiment 1 was a 2 (dynamic/static) x 2 (10%/40% prevalence targets) design. Although overall performance was worse in the dynamic task, both tasks showed a similar magnitude LPE. In Experiment 2, we replicated this effect using a task where subjects searched for either of two targets (Ts and Ls). One target appeared infrequently (10%) and the other moderately (40%). Given this method of manipulating prevalence rate, the quitting threshold explanation does not account for the LPE even for static displays. However, replicating Experiment 1, we found an LPE of similar magnitude for both search scenarios, and lower target detection rates with the dynamic displays, demonstrating the LPE is a potential concern for both static and dynamic searches. These findings suggest an activation threshold explanation of the LPE may better account for our observations than the traditional quitting threshold model.</p>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2024-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141560429","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Combining EEG and eye-tracking in virtual reality: Obtaining fixation-onset event-related potentials and event-related spectral perturbations.","authors":"Debora Nolte, Marc Vidal De Palol, Ashima Keshava, John Madrid-Carvajal, Anna L Gert, Eva-Marie von Butler, Pelin Kömürlüoğlu, Peter König","doi":"10.3758/s13414-024-02917-3","DOIUrl":"https://doi.org/10.3758/s13414-024-02917-3","url":null,"abstract":"<p><p>Extensive research conducted in controlled laboratory settings has prompted an inquiry into how results can be generalized to real-world situations influenced by the subjects' actions. Virtual reality lends itself ideally to investigating complex situations but requires accurate classification of eye movements, especially when combining it with time-sensitive data such as EEG. We recorded eye-tracking data in virtual reality and classified it into gazes and saccades using a velocity-based classification algorithm, and we cut the continuous data into smaller segments to deal with varying noise levels, as introduced in the REMoDNav algorithm. Furthermore, we corrected for participants' translational movement in virtual reality. Various measures, including visual inspection, event durations, and the velocity and dispersion distributions before and after gaze onset, indicate that we can accurately classify the continuous, free-exploration data. Combining the classified eye-tracking with the EEG data, we generated fixation-onset event-related potentials (ERPs) and event-related spectral perturbations (ERSPs), providing further evidence for the quality of the eye-movement classification and timing of the onset of events. Finally, investigating the correlation between single trials and the average ERP and ERSP identified that fixation-onset ERSPs are less time sensitive, require fewer repetitions of the same behavior, and are potentially better suited to study EEG signatures in naturalistic settings. In sum, we designed, modified, and tested an algorithm that allows EEG and eye-tracking data recorded in virtual reality to be combined.</p>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2024-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141560430","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
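The core of the velocity-based classification described above can be sketched in a few lines. This is a simplified illustration of the general technique (thresholding sample-to-sample angular velocity), not the authors' algorithm: it omits the REMoDNav-style adaptive segmentation and the correction for translational movement, and the threshold value is a hypothetical placeholder:

```python
import numpy as np

def classify_velocity(gaze_xy, timestamps, velocity_threshold=100.0):
    """Label each sample as 'saccade' or 'gaze' by velocity thresholding.

    gaze_xy            : (N, 2) array of gaze positions in degrees of visual angle
    timestamps         : (N,) array of sample times in seconds
    velocity_threshold : deg/s above which a sample counts as a saccade
                         (100 deg/s is an illustrative value, not the study's)
    """
    dxy = np.diff(gaze_xy, axis=0)                    # per-sample displacement
    dt = np.diff(timestamps)                          # per-sample time step
    speed = np.hypot(dxy[:, 0], dxy[:, 1]) / dt       # angular velocity, deg/s
    labels = np.where(speed > velocity_threshold, "saccade", "gaze")
    # The first sample has no preceding velocity; reuse the first label.
    return np.concatenate([labels[:1], labels])
```

A slow drift of ~10 deg/s is labeled "gaze", while a large jump between consecutive samples exceeds the threshold and is labeled "saccade"; real pipelines would additionally smooth the velocity trace and merge short glissades, as velocity estimates from raw samples are noisy.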
{"title":"Neural mechanism underlying preview effects and masked priming effects in visual word processing.","authors":"Xin Huang, Brian W L Wong, Hezul Tin-Yan Ng, Werner Sommer, Olaf Dimigen, Urs Maurer","doi":"10.3758/s13414-024-02904-8","DOIUrl":"https://doi.org/10.3758/s13414-024-02904-8","url":null,"abstract":"<p><p>Two classic experimental paradigms - masked repetition priming and the boundary paradigm - have played a pivotal role in understanding the process of visual word recognition. Traditionally, these paradigms have been employed by different communities of researchers, with their own long-standing research traditions. Nevertheless, a review of the literature suggests that the brain-electric correlates of word processing established with both paradigms may show interesting similarities, in particular with regard to the location, timing, and direction of N1 and N250 effects. However, as of yet, no direct comparison has been undertaken between the two paradigms. In the current study, we used combined eye-tracking/EEG to perform such a within-subject comparison using the same materials (single Chinese characters) as stimuli. To facilitate direct comparisons, we used a simplified version of the boundary paradigm - the single word boundary paradigm. Our results show the typical early repetition effects of N1 and N250 for both paradigms. However, repetition effects in N250 (i.e., a reduced negativity following identical-word primes/previews as compared to different-word primes/previews) were larger with the single word boundary paradigm than with masked priming. For N1 effects, repetition effects were similar across the two paradigms, showing a larger N1 after repetitions as compared to alternations. Therefore, the results indicate that at the neural level, a briefly presented and masked foveal prime produces qualitatively similar facilitatory effects on visual word recognition as a parafoveal preview before a single saccade, although such effects appear to be stronger in the latter case.</p>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2024-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141494379","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}