{"title":"How expectations alter search performance","authors":"Natalie A. Paquette, Joseph Schmidt","doi":"10.3758/s13414-025-03022-9","DOIUrl":"10.3758/s13414-025-03022-9","url":null,"abstract":"<div><p>We assessed how expected search difficulty impacts search performance when expectations match and do not match reality. Expectations were manipulated using a blocked design (75% of trials presented at the expected difficulty; target–distractor similarity increased with difficulty). Expectancy was assessed by examining the change in search performance between trials with accurate expectations and easier-than-expected or harder-than-expected trials, matched for search difficulty. Observers searched for Landolt-C targets (Exp-1) or real-world objects (Exp-2). Increased difficulty resulted in reduced accuracy, increased RT and object dwell times (targets and distractors; both experiments), and reduced guidance (Exp-2). Relative to the same level of search difficulty and when expectations were accurate, harder-than-expected search reduced accuracy, RT, and target object dwell times (Exp-1), whereas easier-than-expected search increased RT and target dwell times (Exp-1). While Experiment 2 showed somewhat muted expectancy effects, easier-than-expected search replicated the increased RT observed in Exp-1, with an additional guidance decrement and increased distractor dwell time. These results demonstrate that expectations shift search performance toward the expected difficulty level. Additionally, post hoc analyses revealed that observers who experience larger difficulty effects also experience larger expectancy effects in RT, guidance, and target dwell time.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"87 2","pages":"334 - 353"},"PeriodicalIF":1.7,"publicationDate":"2025-02-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143366409","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Interference from semantically distracting sounds in action scene search","authors":"Tomoki Maezawa, Miho Kiyosawa, Jun I. Kawahara","doi":"10.3758/s13414-025-03023-8","DOIUrl":"10.3758/s13414-025-03023-8","url":null,"abstract":"<div><p>Research on visual search has highlighted the role of crossmodal interactions between semantically congruent visual and auditory stimuli. Typically, such sounds facilitate performance. Conversely, semantically incongruent sounds may impair visual search efficiency for action scenes, though consensus has yet to be reached. This study investigated whether interference effects occur within the action-scene search paradigm. Participants performed a search task involving four simultaneously presented video stimuli, accompanied by one of three sound conditions: sound congruent with the target, congruent with a distractor, or a control sound. Auditory interference was observed, though it was relatively weak and varied across conditions rather than being simply present or absent. The observed variability in interference effects may align with the established view that observers typically ignore semantic distractor information in goal-directed searches, except in cases where the strength of target designation is compromised. These findings offer insights into the complex interplay between auditory and visual stimuli in action scene searches, suggesting that these underlying mechanisms may also apply to other paradigms, such as those involving conventional real object searches.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"87 2","pages":"498 - 510"},"PeriodicalIF":1.7,"publicationDate":"2025-02-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.3758/s13414-025-03023-8.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143366416","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Characterizing the neural underpinnings of attention in the real world via co-registration of eye movements and EEG/MEG: An introduction to the special issue","authors":"Elizabeth Schotter, Brennan Payne, David Melcher","doi":"10.3758/s13414-025-03017-6","DOIUrl":"10.3758/s13414-025-03017-6","url":null,"abstract":"","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"87 1","pages":"1 - 4"},"PeriodicalIF":1.7,"publicationDate":"2025-02-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143191332","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Distractor-response binding influences visual search","authors":"Fredrik Allenmark, Hao Yu, Hermann J. Müller, Zhuanghua Shi, Christian Frings","doi":"10.3758/s13414-025-03016-7","DOIUrl":"10.3758/s13414-025-03016-7","url":null,"abstract":"<div><p>Intertrial priming effects in visual search and action control suggest the involvement of binding and retrieval processes. However, the role of distractor-response binding (DRB) in visual search has been largely overlooked, and the specific processing stage within the functional architecture of attentional guidance where DRB occurs remains unclear. To address these gaps, we implemented two search tasks in which participants responded based on a feature separate from the one defining the target. We kept the target dimension consistent across trials while varying the color and shape of the distractor. Moreover, we either repeated or randomized the target position in different sessions. Our results revealed pronounced response priming (a performance difference between trials where the response changed versus repeated); importantly, this response priming was stronger when distractor features or the target position were repeated than when they changed. These insights affirm the presence of DRB during visual search and support the framework of binding and retrieval in action control as a basis for observed intertrial priming effects related to distractor features. All data are available at: https://github.com/msenselab/distractor_binding.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"87 2","pages":"316 - 333"},"PeriodicalIF":1.7,"publicationDate":"2025-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143191336","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The haptic cues humans use to sense small numbers of objects in a box","authors":"Ilja Frissen, Shuangshuang Xiao, Nurlan Kabdyshev, Moldir Zabirova, Mounia Ziat","doi":"10.3758/s13414-025-03011-y","DOIUrl":"10.3758/s13414-025-03011-y","url":null,"abstract":"<div><p>Humans can acquire behaviorally relevant information about the contents of a container through their sense of touch. A container poses a challenge to the haptic sense as it creates an intermediary between its contents and the observer. Despite this challenge, several studies have shown that individuals are particularly adept at estimating small numbers of objects in an opaque box solely through tactile interaction. This study aimed to identify which physical cues contribute to this ability by systematically attenuating (Experiment 1) or augmenting (Experiment 2) the cues of rolling vibrations, impact, and weight. Rolling cues were manipulated by varying the friction between the objects and the container's floor. Impact cues were manipulated by softening or hardening the container’s internal wall. Weight cues were controlled by equalizing the total weight of the contents, regardless of the number of objects. The findings suggest that rolling vibrations are the primary cues, followed by impact cues, while weight plays only a minor role.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"87 2","pages":"577 - 587"},"PeriodicalIF":1.7,"publicationDate":"2025-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143191370","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A SNARC-like effect for visual speed.","authors":"Michele Vicovaro, Riccardo Boscariol, Mario Dalmaso","doi":"10.3758/s13414-025-03012-x","DOIUrl":"https://doi.org/10.3758/s13414-025-03012-x","url":null,"abstract":"<p><p>Numerical and nonnumerical magnitudes can be represented along a hypothetical left-to-right continuum, where smaller quantities are associated with the left side and larger quantities with the right side. However, these representations are flexible, as their intensity and direction can be modulated by various contextual cues and task demands. In four experiments, we investigated the spatial representation of visual speed. Visual speed is inherently connected to physical space and spatial directions, making it distinct from other magnitudes. With this in mind, we explored whether the spatial representation of visual speed aligns with the typical left-to-right orientation or is influenced dynamically by the movement direction of the stimuli. Participants compared the speed of random dot kinematograms to a reference speed using lateralised response keys. On each trial, all dots moved consistently in one single direction, which varied across the experiments and could also vary from trial to trial in Experiments 2 and 4. The dot movements were left-to-right (Experiment 1), random across a 360° spectrum (Experiment 2), right-to-left (Experiment 3), and random left-to-right or right-to-left (Experiment 4). The results supported a relatively stable left-to-right spatial representation of speed (Experiments 1-3), which was compromised by mutable motion directions along the horizontal axis (Experiment 4). We suggest that representing stimuli as belonging to a single set, rather than different sets, may be crucial for the emergence of spatial representations of quantities.</p>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":" ","pages":""},"PeriodicalIF":1.7,"publicationDate":"2025-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143069890","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Aging and word predictability during reading: Evidence from eye movements and fixation-related potentials","authors":"Ascensión Pagán, Federica Degno, Sara V. Milledge, Richard D. Kirkden, Sarah J. White, Simon P. Liversedge, Kevin B. Paterson","doi":"10.3758/s13414-024-02981-9","DOIUrl":"10.3758/s13414-024-02981-9","url":null,"abstract":"<div><p>The use of context to facilitate the processing of words is recognized as a hallmark of skilled reading. This capability is also hypothesized to change with older age because of cognitive changes across the lifespan. However, research investigating this issue using eye movements or event-related potentials (ERPs) has produced conflicting findings. Specifically, whereas eye-movement studies report larger context effects for older than younger adults, ERP findings suggest that context effects are diminished or delayed for older readers. Crucially, these contrary findings may reflect methodological differences, including use of unnatural sentence displays in ERP research. To address these limitations, we used a coregistration technique to record eye movements (EMs) and fixation-related potentials (FRPs) simultaneously while 44 young adults (18–30 years) and 30 older adults (65+ years) read sentences containing a target word that was strongly or weakly predicted by prior context. Eye-movement analyses were conducted over all data (full EM dataset) and only data matching FRPs. FRPs were analysed to capture early and later components 70–900 ms following fixation-onset on target words. Both eye-movement datasets and early FRPs showed main effects of age group and context, while the full EM dataset and later FRPs revealed larger context effects for older adults. We argue that, by using coregistration methods to address limitations of earlier ERP research, our experiment provides compelling complementary evidence from eye movements and FRPs that older adults rely more on context to integrate words during reading.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"87 1","pages":"50 - 75"},"PeriodicalIF":1.7,"publicationDate":"2025-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.3758/s13414-024-02981-9.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143060964","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The near-miss to cross-modal commutativity","authors":"Jürgen Heller","doi":"10.3758/s13414-025-03014-9","DOIUrl":"10.3758/s13414-025-03014-9","url":null,"abstract":"<div><p>This paper is a follow-up to Ellermeier, Kattner, and Raum (2021, Attention, Perception, & Psychophysics, 83, 2955–2967), and provides a reanalysis of their data on cross-modal commutativity from a Bayesian perspective, and a theory-based analysis grounded on a recently suggested extension of a global psychophysical approach to cross-modal judgments (Heller, 2021, Psychological Review, 128, 509–524). This theory assumes that stimuli are judged against respondent-generated internal references that are modality-specific and potentially role-dependent (i.e., sensitive to whether they pertain to the standard or the variable stimulus in the performed cross-modal magnitude production task). While the Bayesian tests turn out to be inconclusive, the theory-based analysis reveals a massive and systematic role-dependence of internal references. This leads to predicting small but systematic deviations from cross-modal commutativity, which are in line with the observed data. In analogy to a term coined in the context of Weber’s law, this phenomenon is referred to as the near-miss to cross-modal commutativity. The presented theory offers a psychological rationale explaining this phenomenon, and opens up an innovative approach to studying cross-modal perception.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"87 2","pages":"619 - 636"},"PeriodicalIF":1.7,"publicationDate":"2025-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143060973","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Experience-driven suppression of irrelevant distractor locations is context dependent","authors":"Ayala S. Allon, Andrew B. Leber","doi":"10.3758/s13414-024-03009-y","DOIUrl":"10.3758/s13414-024-03009-y","url":null,"abstract":"<div><p>Humans can learn to attentionally suppress salient, irrelevant information when it consistently appears at a predictable location. While this ability confers behavioral benefits by reducing distraction, the full scope of its utility is unknown. As people locomote and/or shift between task contexts, known-to-be-irrelevant locations may change from moment to moment. Here we assessed a context-dependent account of learned suppression: can individuals flexibly update the locations they suppress, from trial to trial, as a function of task context? Participants searched for a shape target in displays that sometimes contained a salient, irrelevant color singleton distractor. When one scene category was presented in the background (e.g., forests), the distractor had a greater probability of appearing in one display location than the others; for another scene category (e.g., cities), we used a different high-probability location. Results in Experiments 1 and 2 (and in the Online Supplementary Material) failed to show any context-dependent suppression effects, consistent with earlier work. However, in Experiments 3 and 4, we reinforced the separation between task contexts by using distinct sets of shape and color stimuli as well as distinct kinds of reported features (line orientation vs. gap judgment). Results now showed robust task-dependent signatures of learned spatial suppression and did not appear to be tied to explicit awareness of the relationship between context and high-probability distractor location. Overall, these results reveal a mechanism of learned spatial suppression that is flexible and sensitive to task contexts, albeit one that requires sufficient processing of these contexts.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"87 2","pages":"285 - 302"},"PeriodicalIF":1.7,"publicationDate":"2025-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143054315","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Correction to: Temporal dynamics of activation and suppression in a spatial Stroop task: A distribution analysis on gaze and arrow targets","authors":"Yoshihiko Tanaka, Takato Oyama, Kenta Ishikawa, Matia Okubo","doi":"10.3758/s13414-025-03021-w","DOIUrl":"10.3758/s13414-025-03021-w","url":null,"abstract":"","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"87 2","pages":"719 - 719"},"PeriodicalIF":1.7,"publicationDate":"2025-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143054312","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}