{"title":"Transfer of statistical regularity in visual search","authors":"Gabriel Siegel, Richard A. Abrams","doi":"10.3758/s13414-025-03117-3","DOIUrl":"10.3758/s13414-025-03117-3","url":null,"abstract":"<div><p>People are able to take advantage of statistical regularities in scenes, using those regularities to bias their attention to the likely locations of items of interest. People also seem able to learn object-centered statistical regularities—for example, that the top of an object is the most likely target location. We show here that such regularities transfer to new spatial locations even in the absence of any explicit object and hence may not be truly object centered. Additionally, when transfer is measured on a new object with a new shape—the transfer is substantially reduced. The findings suggest that statistical information about likely target locations can be encoded in a configuration-based reference frame that is sensitive to the context established by the objects in the scene. The results lead to a new interpretation of earlier findings and have important implications for understanding the coordinate systems in which attentional priorities are represented.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"87 6","pages":"1852 - 1863"},"PeriodicalIF":1.7,"publicationDate":"2025-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144602325","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Knowledge of effort modulates visual memory biases for body postures","authors":"Qiu Han, Marco Gandolfo, Jules van Dommelen, San Schoenmacker, Klemens Drobnicki, Marius V. Peelen","doi":"10.3758/s13414-025-03112-8","DOIUrl":"10.3758/s13414-025-03112-8","url":null,"abstract":"<div><p>The visual memory of others’ postures has been proposed to be shaped by knowledge and expectations. For example, the visual memory of a lifted arm was recently shown to be biased downward, suggesting that observers predicted the upcoming state of the arm based on knowledge of the effort required to hold the arm up against gravity. Alternatively, the downward bias for body postures could reflect an automatic normalization toward the most frequently observed arm position, with arms more often observed in a low position. Here, in three experiments, we provide evidence that the downward bias is flexibly modulated by knowledge of effort. In Experiment 1, we found a stronger downward bias for arm postures that are relatively effortful (lifting an arm above the shoulders while standing) compared with arm postures that are less effortful (lifting an arm above the chest while lying down). In Experiment 2, we found a stronger downward bias when the actor was standing (viewed from the side) than when the actor was lying down (viewed from above), even though the arm postures were visually identical. Moreover, dividing attention during the encoding stage reduced the bias, showing that attentive processing of the stimulus was required for the bias to emerge. Finally, in Experiment 3, we found that concurrently executing the observed posture during the visual memory task did not further increase the downward bias. Together, these findings demonstrate a high-level cognitive influence on the visual memory for body postures.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"87 6","pages":"1994 - 2006"},"PeriodicalIF":1.7,"publicationDate":"2025-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12331809/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144487210","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visual short-term memory in action and non-action video game players: A focus on short and long delay intervals","authors":"Andrea Pavan, Seyma Koc Yilmaz, Hulusi Kafaligonul, Julia Föcker, Mark W. Greenlee","doi":"10.3758/s13414-025-03118-2","DOIUrl":"10.3758/s13414-025-03118-2","url":null,"abstract":"<div><p>Previous research suggests that action video game players (AVGPs) often outperform non-action video game players (NAVGPs) in cognitive tasks. This study compared the precision of visual short-term memory (VSTM) for motion direction between AVGPs and age- and gender-matched NAVGPs. Participants memorized the direction of random dot kinematograms (RDKs) presented sequentially (one to four per trial) and reproduced the direction of a probed RDK after either a short (0.5 s) or long (3 s) delay. Initial training ensured that all participants reached a predefined performance level with a single stimulus, with AVGPs requiring fewer training blocks to meet this criterion. While no significant group differences emerged at short delays, AVGPs showed significantly higher raw precision than NAVGPs in long-delay trials involving a single stimulus. However, this group difference did not reach significance in the corresponding precision parameter estimated by the Standard Mixture Model. To investigate memory-encoding strategies, we applied the resource-rational model (RRM), which formalizes the trade-off between behavioral accuracy and neural cost. Model estimates showed that NAVGPs placed greater weight on neural cost relative to behavioral benefits during encoding, particularly in long-delay trials, leading to reduced precision. In contrast, AVGPs allocated memory resources more efficiently, maintaining higher precision over extended intervals. These findings suggest that AVGPs adopt more effective encoding strategies, dynamically adjusting resource allocation to task demands. This study highlights the utility of resource-rational modeling for understanding cognitive performance differences linked to action video game experience. Future research could further explore how these strategies translate across different cognitive domains.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"87 6","pages":"1915 - 1938"},"PeriodicalIF":1.7,"publicationDate":"2025-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144477971","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Implicit effect of visual long-term memory for nonverbal objects on recognition judgment","authors":"Tomoe Masuoka, Megumi Nishiyama, Yuna Tsurusaki, Takafumi Terasawa","doi":"10.3758/s13414-025-03108-4","DOIUrl":"10.3758/s13414-025-03108-4","url":null,"abstract":"<div><p>This study uses an indirect recognition procedure to examine whether prior exposure to nonverbal visual objects affects recognition judgments in later, unrelated recognition tests. We also examined the effect of matching operations between study and test on recognition judgments. The experiment consisted of two sessions. The first session was an incidental learning task: Each object was presented twice, and participants were asked to count the number of corners of the presented object. In the second session after 3 weeks, participants performed the same task as in the first session and then performed an unexpected recognition test. In this test, participants were asked to identify whether the presented object had appeared in the second session. To unify the operation between study and test, some participants were required to count the number of corners of the presented object before the recognition judgment. The results revealed that recognition performance for the objects that appeared in the first session was significantly different from that of objects that had not appeared, even when participants were not asked to recall the episode of the first session when performing the recognition test. Although the results of the effect of the matching operation suggested a negative effect on recognition, the results were unclear. This finding indicates that representations for nonverbal objects are preserved for at least 3 weeks. This also highlights the need to consider the implicit effect of a brief prior experience on recognition judgments.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"87 6","pages":"1841 - 1851"},"PeriodicalIF":1.7,"publicationDate":"2025-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12331788/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144334439","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Number blindness in human vision","authors":"James Negen","doi":"10.3758/s13414-025-03113-7","DOIUrl":"10.3758/s13414-025-03113-7","url":null,"abstract":"<div><p>There is an ongoing controversy over whether human vision first estimates area and number, deriving our sense of density via division, or if it first estimates area and density, deriving our sense of number via multiplication. If number and area are both primary independent dimensions of visual perception then we should observe cross-magnitude influence between them in a simple choice task, especially if that influence would improve performance and this is explicitly explained to the participants. In contrast, here we show that human vision exhibits a specific kind of number blindness: performance on an area-choice task (which of these rectangles is larger?) is not improved by the addition of a perfectly correlated number signal (the larger one always has more dots on it) that creates equivalent density – even when explanations, reminders, and accurate feedback are given to the participants. This replicated across two experiments (N = 82, 122) with slightly different stimuli. Control analyses with brightness in Experiment 1 indicate that this is not a general resistance to the predicted cross-magnitude influence. This indicates that density, not number, is the primary independent perceptual dimension in human vision.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"87 6","pages":"1939 - 1947"},"PeriodicalIF":1.7,"publicationDate":"2025-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12331800/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144327840","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Perceptual grouping of individuals in social triads.","authors":"Luowei Yan, Clara Colombatto, Jelena Ristic","doi":"10.3758/s13414-025-03119-1","DOIUrl":"10.3758/s13414-025-03119-1","url":null,"abstract":"<p><p>Human life is built around the need for group membership and social connections. Recent research shows that small interactive groups of two and three individuals (i.e., dyads and triads) are found faster in visual search tasks when group members are facing toward versus away from one another. This 'facing advantage' may reflect the involvement of perceptual grouping processes, with facing groups perceived as a unified whole. Here, we tested this grouping hypothesis by measuring search performance for individuals who were positioned within facing or non-facing groups of three. If facing triads were perceptually grouped, individuation of group members in those triads should be hindered. Participants searched for a target individual, a person raising a fist or a person raising a pointing finger, who was positioned in one of four or eight facing or non-facing triads. The data indicated that while the search for target individuals pointing a finger was overall facilitated, it was specifically hindered when this person was positioned within a facing compared to a non-facing group. These results suggest that the perception of social groups may be attuned to the overall configuration of the group, but also to more sophisticated social communicative signals of individual group members.</p>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":" ","pages":""},"PeriodicalIF":1.7,"publicationDate":"2025-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144334440","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Ensemble perception requires attention","authors":"Ruth Kimchi, Shahar Sabary","doi":"10.3758/s13414-025-03111-9","DOIUrl":"10.3758/s13414-025-03111-9","url":null,"abstract":"<div><p>The question of whether ensemble perception can take place without attention is unresolved. We examined this issue in four experiments, using an inattention paradigm that provides an on-line, indirect measure of processing of unattended stimuli. Participants performed an attention-demanding change-detection task on a small matrix presented on a background of task-irrelevant ensemble consisting of circles of different size (Experiment 1) or oriented lines (Experiments 2–4). Independently of any change in the matrix, the ensemble mean changed or stayed the same between successive displays on each trial. We hypothesized that if ensemble mean is extracted under inattention, changes in the ensemble mean would produce congruency effects on the speed or accuracy of performance in the matrix change judgments, such that performance is faster or more accurate on congruent than incongruent trials. The results showed that changes in the ensemble mean size or mean orientation produced no congruency effects on performance of the target change-detection task. Also, participants could not report, when probed with surprise questions, whether or not the ensemble mean changed. When participants attended to the ensemble, their accuracy of explicit reports about a change were significantly above chance. These results are seen to suggest that ensemble perception requires attention. The differences between the present study and previous ones, concerning the conditions and definition of unattended and their implication for understanding the relation between ensemble perception and attention, are discussed.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"87 6","pages":"1888 - 1903"},"PeriodicalIF":1.7,"publicationDate":"2025-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12331843/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144327839","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"How prevalence expectations and feedback impact decision-making in person searches","authors":"Chenxin Yu, Kara N. Moore, Dara U. Zwemer","doi":"10.3758/s13414-025-03107-5","DOIUrl":"10.3758/s13414-025-03107-5","url":null,"abstract":"<div><p>Searching for missing or wanted persons is a challenging task that requires sustained attention and active scanning for a difficult-to-recognize stimulus (i.e., an unfamiliar face). Given the naturally low prevalence of missing or wanted persons, people may have low expectations of encountering them in their midst. Understanding how their expectations, combined with feedback and experience, influence search performance is critical for improving real-world search efforts. We manipulated prevalence expectations (40% vs. 2%) and trial-level performance feedback (present vs. absent) in a visual search task for unfamiliar target faces. Critically, the target persons never appeared during the task. We examined how performance changed over time. Among participants who did not receive feedback, those with high-prevalence expectations made more false alarms and terminated their searches earlier than those with low-prevalence expectations. In contrast, participants who received feedback were not affected by prevalence expectations. While prevalence expectations had limited impact on search behavior, feedback enhanced participants’ ability to align their expectations with the true prevalence rate more effectively.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"87 7","pages":"2146 - 2164"},"PeriodicalIF":1.7,"publicationDate":"2025-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144310870","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Salient distractors influence information accrual rather than quitting threshold in visual search","authors":"Mark W. Becker, Jeff Moher, Derrek T. Montalvo, Andrew Rodriguez","doi":"10.3758/s13414-025-03104-8","DOIUrl":"10.3758/s13414-025-03104-8","url":null,"abstract":"<div><p>Moher (<i>Psychological Science</i>, <i>31</i>[1], 31–42, 2020) recently reported that adding a salient distractor (SD) to a visual search display results in more misses and faster target-absent reaction times, a pattern interpreted as a reduction in the quitting threshold; participants searched less of the display before responding target absent. This finding could have implications for real-world searches with distraction. However, in those experiments, the salient distractor shared critical features with the frequent distractors. In two experiments, we expand on this finding by showing that the pattern of results maintains when a salient distractor does not share critical features with the frequent distractors but reverses when it shares features with the target. The pattern of results is consistent with the salient distractor providing a rapid accumulation of evidence towards its associated boundary in a drift diffusion framework—when it shares features with the target there is a burst of evidence accumulation toward the “present” boundary; when it is a distractor there is a burst of evidence toward the “absent” boundary. We believe this account of the SD’s impact provides a more parsimonious account than a quitting threshold account and can better explain when a salient distractor will harm or help target detection.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"87 5","pages":"1458 - 1470"},"PeriodicalIF":1.7,"publicationDate":"2025-06-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12204935/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144310884","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Errors in visual search: How can we reduce them?","authors":"Aoqi Li, Jeremy M. Wolfe, Johan Hulleman","doi":"10.3758/s13414-025-03095-6","DOIUrl":"10.3758/s13414-025-03095-6","url":null,"abstract":"<div><p>Observers routinely make errors in almost any visual search task. In previous online experiments, we found that indiscriminately highlighting all item positions in a noisy search display reduced errors. Here, we conducted two eye-tracking studies to investigate the mechanics of this error reduction: Does cueing direct attention to previously overlooked regions or enhance attention/processing at cued locations? Displays were presented twice. In Experiment 1, for half of the displays, the cue was only presented on the first copy (Cue – noCue) and for the other half, only presented on the second copy (noCue – Cue). Cueing successfully reduced errors but did not significantly affect reaction times (RTs). This contrasts with the online experiment where the cue increased RTs while reducing errors. In Experiment 2, we replicated the design of the online experiment by splitting the displays into noCue – noCue and noCue – Cue pairs. We now found that the cue reduced errors, but increased RTs on trials with high-contrast targets. The eye-tracking data show that participants fixated closer to items and fixation durations were shorter in cued displays. The smaller fixation-item distance reduced search errors, where observers never fixated the target, for low-contrast targets and the remaining low-contrast errors seemed to be recognition errors, where observers looked at the target but too quickly looked away. Taken together, these results suggest the main reason that errors were reduced was because attention was more properly directed to overlooked regions by the cues. Enhancement of attention at the cued areas may have played an auxiliary role.</p></div>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":"87 5","pages":"1471 - 1495"},"PeriodicalIF":1.7,"publicationDate":"2025-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12205024/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144295435","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}