A SNARC-like effect for visual speed
Michele Vicovaro, Riccardo Boscariol, Mario Dalmaso
Attention, Perception, & Psychophysics, 2025-01-29. DOI: https://doi.org/10.3758/s13414-025-03012-x

Abstract: Numerical and nonnumerical magnitudes can be represented along a hypothetical left-to-right continuum, where smaller quantities are associated with the left side and larger quantities with the right side. However, these representations are flexible: their intensity and direction can be modulated by contextual cues and task demands. In four experiments, we investigated the spatial representation of visual speed. Visual speed is inherently connected to physical space and spatial directions, making it distinct from other magnitudes. With this in mind, we explored whether the spatial representation of visual speed aligns with the typical left-to-right orientation or is influenced dynamically by the movement direction of the stimuli. Participants compared the speed of random dot kinematograms to a reference speed using lateralised response keys. On each trial, all dots moved consistently in a single direction, which varied across the experiments and could also vary from trial to trial in Experiments 2 and 4. The dot movements were left-to-right (Experiment 1), random across a 360° spectrum (Experiment 2), right-to-left (Experiment 3), and random left-to-right or right-to-left (Experiment 4). The results supported a relatively stable left-to-right spatial representation of speed (Experiments 1-3), which was compromised by mutable motion directions along the horizontal axis (Experiment 4). We suggest that representing stimuli as belonging to a single set, rather than to different sets, may be crucial for the emergence of spatial representations of quantities.
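Random dot kinematograms (RDKs), used in this study and in several entries below, can be sketched in a few lines of code. The dot count, field size, and per-frame update here are illustrative assumptions, not the authors' parameters; in this study all dots moved in one direction, which corresponds to setting coherence to 1.0.

```python
import numpy as np

def rdk_step(positions, direction_deg, speed, coherence, rng, field=1.0):
    """Advance one frame of a random dot kinematogram.

    positions : (n, 2) array of dot x/y coordinates in [0, field)
    direction_deg : coherent motion direction in degrees (0 = rightward)
    speed : displacement per frame, in the same units as positions
    coherence : fraction of dots moving in the coherent direction;
                the remaining dots move in random directions
    """
    n = len(positions)
    # choose which dots move coherently on this frame
    coherent = rng.random(n) < coherence
    angles = np.where(coherent,
                      np.deg2rad(direction_deg),
                      rng.uniform(0, 2 * np.pi, n))
    step = speed * np.stack([np.cos(angles), np.sin(angles)], axis=1)
    # wrap dots that leave the square field
    return (positions + step) % field

# illustrative usage: 100 dots, fully coherent rightward motion
rng = np.random.default_rng(0)
dots = rng.random((100, 2))
dots = rdk_step(dots, direction_deg=0, speed=0.01, coherence=1.0, rng=rng)
```

Lowering `coherence` toward 0 yields the ambiguous-motion displays used to measure motion sensitivity in Experiments described later in this listing.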
Aging and word predictability during reading: Evidence from eye movements and fixation-related potentials
Ascensión Pagán, Federica Degno, Sara V Milledge, Richard D Kirkden, Sarah J White, Simon P Liversedge, Kevin B Paterson
Attention, Perception, & Psychophysics, 2025-01-28. DOI: https://doi.org/10.3758/s13414-024-02981-9

Abstract: The use of context to facilitate the processing of words is recognized as a hallmark of skilled reading. This capability is also hypothesized to change with older age because of cognitive changes across the lifespan. However, research investigating this issue using eye movements or event-related potentials (ERPs) has produced conflicting findings. Specifically, whereas eye-movement studies report larger context effects for older than for younger adults, ERP findings suggest that context effects are diminished or delayed for older readers. Crucially, these contrary findings may reflect methodological differences, including the use of unnatural sentence displays in ERP research. To address these limitations, we used a coregistration technique to record eye movements (EMs) and fixation-related potentials (FRPs) simultaneously while 44 young adults (18-30 years) and 30 older adults (65+ years) read sentences containing a target word that was strongly or weakly predicted by the prior context. Eye-movement analyses were conducted over all data (the full EM dataset) and over only the data matching FRPs. FRPs were analysed to capture early and later components 70-900 ms following fixation onset on target words. Both eye-movement datasets and the early FRPs showed main effects of age group and context, while the full EM dataset and later FRPs revealed larger context effects for older adults. We argue that, by using coregistration methods to address the limitations of earlier ERP research, our experiment provides compelling complementary evidence from eye movements and FRPs that older adults rely more on context to integrate words during reading.
The near-miss to cross-modal commutativity
Jürgen Heller
Attention, Perception, & Psychophysics, 2025-01-28. DOI: https://doi.org/10.3758/s13414-025-03014-9

Abstract: This paper is a follow-up to Ellermeier, Kattner, and Raum (2021, Attention, Perception, & Psychophysics, 83, 2955-2967). It provides a reanalysis of their data on cross-modal commutativity from a Bayesian perspective, and a theory-based analysis grounded in a recently suggested extension of the global psychophysical approach to cross-modal judgments (Heller, 2021, Psychological Review, 128, 509-524). This theory assumes that stimuli are judged against respondent-generated internal references that are modality-specific and potentially role-dependent (i.e., sensitive to whether they pertain to the standard or to the variable stimulus in the cross-modal magnitude production task). While the Bayesian tests turn out to be inconclusive, the theory-based analysis reveals a massive and systematic role-dependence of internal references. This leads to the prediction of small but systematic deviations from cross-modal commutativity, which are in line with the observed data. In analogy to a term coined in the context of Weber's law, this phenomenon is referred to as the near-miss to cross-modal commutativity. The presented theory offers a psychological rationale for this phenomenon and opens up an innovative approach to studying cross-modal perception.
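For readers unfamiliar with the property under test, cross-modal commutativity can be stated compactly. The notation below is a simplified sketch, not Heller's full model: write $x_p(s)$ for the stimulus a respondent produces so that it appears $p$ times as intense as a standard $s$, possibly in another modality.

```latex
% Commutativity of successive magnitude productions:
% producing "p times, then q times" ends at the same level
% as producing "q times, then p times".
x_q\bigl(x_p(s)\bigr) \;=\; x_p\bigl(x_q(s)\bigr)
```

The "near-miss" is a small but systematic violation of this equality, which the paper's theory predicts when the internal reference used for the standard differs from the one used for the variable stimulus.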
Experience-driven suppression of irrelevant distractor locations is context dependent
Ayala S Allon, Andrew B Leber
Attention, Perception, & Psychophysics, 2025-01-27. DOI: https://doi.org/10.3758/s13414-024-03009-y

Abstract: Humans can learn to attentionally suppress salient, irrelevant information when it consistently appears at a predictable location. While this ability confers behavioral benefits by reducing distraction, the full scope of its utility is unknown. As people locomote and/or shift between task contexts, known-to-be-irrelevant locations may change from moment to moment. Here we assessed a context-dependent account of learned suppression: can individuals flexibly update the locations they suppress, from trial to trial, as a function of task context? Participants searched for a shape target in displays that sometimes contained a salient, irrelevant color singleton distractor. When one scene category was presented in the background (e.g., forests), the distractor had a greater probability of appearing in one display location than in the others; for another scene category (e.g., cities), we used a different high-probability location. Results in Experiments 1 and 2 (and in the Online Supplementary Material) failed to show any context-dependent suppression effects, consistent with earlier work. However, in Experiments 3 and 4, we reinforced the separation between task contexts by using distinct sets of shape and color stimuli as well as distinct kinds of reported features (line orientation vs. gap judgment). Results now showed robust task-dependent signatures of learned spatial suppression, which did not appear to be tied to explicit awareness of the relationship between context and high-probability distractor location. Overall, these results reveal a mechanism of learned spatial suppression that is flexible and sensitive to task contexts, albeit one that requires sufficient processing of these contexts.
Correction to: Temporal dynamics of activation and suppression in a spatial Stroop task: A distribution analysis on gaze and arrow targets
Yoshihiko Tanaka, Takato Oyama, Kenta Ishikawa, Matia Okubo
Attention, Perception, & Psychophysics, 2025-01-27. DOI: https://doi.org/10.3758/s13414-025-03021-w
Fixation-related potentials during a virtual navigation task: The influence of image statistics on early cortical processing
Anna Madison, Chloe Callahan-Flintoft, Steven M Thurman, Russell A Cohen Hoffing, Jonathan Touryan, Anthony J Ries
Attention, Perception, & Psychophysics, 2025-01-23. DOI: https://doi.org/10.3758/s13414-024-03002-5

Abstract: Historically, electrophysiological correlates of scene processing have been studied in experiments using static stimuli presented for fixed durations while participants maintained a fixed eye position. Gaps remain in generalizing these findings to real-world conditions, where eye movements are made to select new visual information and where the environment remains stable but changes with our position and orientation in space, driving dynamic visual stimulation. Co-recording of eye movements and electroencephalography (EEG) leverages fixations as time-locking events in the EEG recording under free-viewing conditions to create fixation-related potentials (FRPs), providing a neural snapshot with which to study visual processing under naturalistic conditions. The current experiment explored the influence of low-level image statistics, specifically luminance and a metric of spatial frequency (the slope of the amplitude spectrum), on the early visual components evoked by fixation onsets in a free-viewing visual search and navigation task in a virtual environment. This research combines FRPs with an optimized approach to removing ocular artifacts and with deconvolution modeling to correct for the overlapping neural activity inherent in any free-viewing paradigm. The results suggest that the early visual components of the FRP, namely the lambda response and the N1, are sensitive to luminance and spatial frequency around fixation, separate from modulation due to underlying differences in eye-movement characteristics. Together, our results demonstrate the utility of studying the influence of image statistics on FRPs using a deconvolution modeling approach to control for overlapping neural activity and oculomotor covariates.
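The spatial-frequency metric named in this abstract, the slope of the amplitude spectrum, is commonly estimated by rotationally averaging the 2-D Fourier amplitude spectrum of an image patch and fitting a line in log-log coordinates. The following is a generic sketch of that standard computation, not the authors' exact pipeline (natural images typically show a slope near -1, i.e., amplitude falling roughly as 1/f).

```python
import numpy as np

def amplitude_spectrum_slope(patch):
    """Estimate the log-log slope of an image patch's radially
    averaged Fourier amplitude spectrum."""
    patch = patch - patch.mean()            # remove the DC component
    amp = np.abs(np.fft.fftshift(np.fft.fft2(patch)))
    h, w = patch.shape
    yy, xx = np.indices((h, w))
    r = np.hypot(yy - h // 2, xx - w // 2)  # radial frequency of each bin
    # rotational average: mean amplitude within each one-pixel-wide annulus
    radii = np.arange(1, min(h, w) // 2)
    radial_amp = np.array([amp[(r >= k - 0.5) & (r < k + 0.5)].mean()
                           for k in radii])
    # slope of the best-fitting line in log-log coordinates
    slope, _ = np.polyfit(np.log(radii), np.log(radial_amp), 1)
    return slope
```

White noise has a flat amplitude spectrum (slope near 0), so the function can be sanity-checked by comparing its output on noise images against synthetic 1/f images.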
Correction to: The N400 component reflecting semantic and repetition priming of visual scenes is suppressed during the attentional blink
Courtney Guida, Minwoo J B Kim, Olivia A Stibolt, Alyssa Lompado, James E Hoffman
Attention, Perception, & Psychophysics, 2025-01-17. DOI: https://doi.org/10.3758/s13414-024-03007-0
The role of dynamic shape cues in the recognition of emotion from naturalistic body motion
Erika Ikeda, Nathan Destler, Jacob Feldman
Attention, Perception, & Psychophysics, 2025-01-16. DOI: https://doi.org/10.3758/s13414-024-02990-8

Abstract: Human observers can often judge emotional or affective states from bodily motion, even in the absence of facial information, but the mechanisms underlying this inference are not completely understood. Important clues come from the literature on "biological motion" using point-light displays (PLDs), which convey human action, and possibly emotion, apparently on the basis of body movements alone. However, most studies have used simplified and often exaggerated displays chosen to convey emotions as clearly as possible. In the current study we examine emotion interpretation using more naturalistic stimuli, drawn from narrative films, security footage, and other sources not created for experimental purposes. We use modern algorithmic methods to extract joint positions, from which we create three display types intended to probe the nature of the cues observers use to interpret emotions: PLDs; stick figures, which convey "skeletal" information more overtly; and a control condition in which joint positions are connected in an anatomically incorrect manner. The videos depicted a range of emotions, including fear, joy, nurturing, anger, sadness, and determination. Subjects were able to estimate the depicted emotion with a high degree of reliability and accuracy, most effectively from stick figures, somewhat less so from PLDs, and least from the control condition. These results confirm that people can interpret emotion from naturalistic body movements alone, and suggest that the mechanisms underlying this interpretation rely heavily on skeletal representations of dynamic shape.
The role of attention in eliciting a musically induced visual motion aftereffect
Hannah Cormier, Christine D Tsang, Stephen C Van Hedger
Attention, Perception, & Psychophysics, 2025-01-15. DOI: https://doi.org/10.3758/s13414-024-02985-5

Abstract: Previous studies have reported visual motion aftereffects (MAEs) following prolonged exposure to auditory stimuli depicting motion, such as ascending or descending musical scales. The role of attention in modulating these cross-modal MAEs, however, remains unclear. The present study manipulated the level of attention directed to musical scales depicting motion and assessed subsequent changes in MAE strength. In Experiment 1, participants either responded to an occasional secondary auditory stimulus presented concurrently with the musical scales (diverted-attention condition) or focused on the scales (control condition). In Experiment 2 we increased the attentional load of the task by having participants perform an auditory 1-back task in one ear while the musical scales were played in the other. Visual motion perception in both experiments was assessed via random dot kinematograms (RDKs) varying in motion coherence. Results from Experiment 1 replicated prior work, in that extended listening to ascending scales resulted in a greater likelihood of judging RDK motion as descending, in line with the MAE. In contrast, the MAE was eliminated in Experiment 2. These results were internally replicated using an in-lab, within-participant design (Experiment 3). These results suggest that attention is necessary for eliciting an auditory-induced visual MAE.
The degree of parallel/serial processing affects stimulus-driven and memory-driven attentional capture: Evidence for the attentional window account
Cheolhwan Kim, Nahyeon Lee, Koeun Jung, Suk Won Han
Attention, Perception, & Psychophysics, 2025-01-14. DOI: https://doi.org/10.3758/s13414-024-03003-4

Abstract: The issue of whether a salient stimulus in the visual field captures attention in a stimulus-driven manner has been debated for several decades. The attentional window account proposes to resolve this issue by claiming that a salient stimulus captures attention and interferes with target processing only when an attentional window is set wide enough to encompass both the target and the salient distractor. By contrast, when a small attentional window is serially shifted among individual stimuli to find a target, no capture occurs. Research findings have both supported and challenged this attentional window account. However, in these studies the attentional window size was improperly estimated, necessitating a re-evaluation of the account. Here, using a recently developed visual search paradigm, we investigated whether visual stimuli were processed in a parallel or a serial manner. We found significant attentional capture when multiple stimuli were processed in parallel within a large attentional window. By contrast, when a small window had to be serially shifted, no capture was found. We conclude that the attentional window account can be a useful framework for resolving the widespread debate regarding stimulus-driven attentional capture.