{"title":"Pouring, scooping, bouncing, rolling, twisting, and rotating: Does spontaneous categorical perception of dynamic event types reflect verbal encoding or visual processing?","authors":"Huichao Ji, Brian J Scholl","doi":"10.3758/s13414-025-03141-3","DOIUrl":"https://doi.org/10.3758/s13414-025-03141-3","url":null,"abstract":"<p><p>What we see encompasses not only lower-level properties (such as a ball's shape or motion) but also categorical events (such as a ball bouncing vs. rolling). Recent work demonstrates that such categorical perception occurs spontaneously during passive scene viewing: observers are better able to identify changes in static or dynamic scenes when the change involves different \"visual verbs\" (e.g., pouring vs. scooping), even when the within-type changes (e.g., across two different scenes of pouring) are objectively greater in magnitude. Might this occur as a part of visual processing itself, even without explicit verbal encoding? To find out, we discouraged verbal labeling via explicit instructions, a concurrent verbal suppression task, or both. In all cases, we continued to observe robust cross-event-type advantages for change detection, while carefully controlling lower-level visual features-in contrasts including pouring versus scooping, bouncing versus rolling, and rotating versus twisting. This suggests that we spontaneously see the world in terms of different \"visual verbs\" even without explicit verbal labeling.</p>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":" ","pages":""},"PeriodicalIF":1.7,"publicationDate":"2025-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145193973","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Combined conceptual and perceptual control of visual attention in search for real-world objects.","authors":"Brett Bahle, Kurt Winsler, John E Kiat, Steven J Luck","doi":"10.3758/s13414-025-03116-4","DOIUrl":"https://doi.org/10.3758/s13414-025-03116-4","url":null,"abstract":"<p><p>When we search for an object in the natural visual environment, we sometimes know exactly what the object looks like. At other times, however, we know only the category of the object. For example, if we are looking for our own bath towel, we might know that it is brown and is folded into a rectangle. However, if we are looking for a towel in a friend's house, we might not know its color or whether it is folded or lying in a clump. Consequently, we may sometimes be able to use specific perceptual features to guide search, but some search tasks are so conceptual in nature that the relevant visual features are difficult to specify. Here, we found that eye-movement patterns during visual search could be predicted by perceptual dimensions derived from crowd-sourced data (THINGS), but only when observers had previously seen the specific target object. When only the category of the desired object was known (because the observer had never seen the specific target), eye-movement patterns were predicted by conceptual dimensions derived from a natural language processing model (ConceptNet), and perceptual features had no significant predictive ability once the conceptual information was statistically controlled. In addition, as observers gained experience searching for a specific exemplar of a category, they became progressively more reliant on perceptual features and less reliant on conceptual features. Together, these findings provide novel evidence that conceptual information can influence search, especially when the precise perceptual features of an object are unknown.</p>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":" ","pages":""},"PeriodicalIF":1.7,"publicationDate":"2025-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145151947","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Shape and word parts combine linearly in the Bouba-Kiki effect.","authors":"Ananya Passi, S P Arun","doi":"10.3758/s13414-025-03151-1","DOIUrl":"https://doi.org/10.3758/s13414-025-03151-1","url":null,"abstract":"<p><p>Languages have evolved in part due to cross-modal associations between shape and sound. A famous example is the Bouba-Kiki effect, wherein humans associate words like bouba/kiki to round/angular shapes. How does the Bouba-Kiki effect work for natural words and shapes that contain a mixture of features? If the effect is holistic, the effect for a composite stimulus would not be explainable using the parts. If the effect is compositional, it will be. Here we provide evidence for the latter possibility. In Experiments 1 and 2, we standardized bouba-like and kiki-like shapes and words for use in subsequent experiments. In Experiments 3-5, we created composite shapes/words by combining bouba-like & kiki-like parts. In all experiments, the Bouba-Kiki effect strength for composite shapes/words was predicted remarkably well as a linear sum of the contributions of the constituent parts. Our results greatly simplify our understanding of the Bouba-Kiki effect, leaving little room for holism.</p>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":" ","pages":""},"PeriodicalIF":1.7,"publicationDate":"2025-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144979417","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Second-order facial features are processed analytically in composite faces.","authors":"Xue Jun Cheng, Daniel R Little","doi":"10.3758/s13414-025-03144-0","DOIUrl":"https://doi.org/10.3758/s13414-025-03144-0","url":null,"abstract":"<p><p>In contrast to claims of holistic processing, upright aligned composite face morphs were recently shown to be processed in the same manner as inverted or misaligned composite face morphs (Cheng et al. 2018. Journal of Experimental Psychology: Learning, Memory and Cognition, 44, 833-862). In the present paper, we replicate that work, using a set of schematic faces which vary second-order features (e.g., lip height and eye separation) in the top and bottom halves of the schematic face. We find that the present stimuli show the hallmarks of holistic processing in a complete composite face task, but differ from composite face morphs in that the best fitting MDS metric is more commensurate with an assumption of integrality (i.e., Euclidean distance). Nevertheless, we also find that, as with morph faces, the processing of upright aligned and upright misaligned faces is consistent with a mixture of serial and parallel processing. Importantly, we found little evidence of any strong holistic pooling of the top and bottom face halves into a single object. These results remain consistent with the idea that composite faces are not processed differently from other objects with separable dimensions but instead that composite faces allow more parallel processing when aligned than when misaligned. Data and code are available from: http://github.com/knowlabUnimelb/SCHEMATICFACERULES .</p>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":" ","pages":""},"PeriodicalIF":1.7,"publicationDate":"2025-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144979407","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Steering in the dark: The impact of environmental luminance on driver behavior through optical flow analysis.","authors":"Jie Wang, Jiangtong Li, Yi Xiao, Kang Song","doi":"10.3758/s13414-025-03146-y","DOIUrl":"https://doi.org/10.3758/s13414-025-03146-y","url":null,"abstract":"<p><p>The visual perception and steering behavior of drivers are known to be influenced by environmental lighting, but the underlying perception mechanisms, particularly the role of optical flow under low-luminance conditions, remain insufficiently understood. In a simulated driving experiment, 32 participants were exposed to five controlled luminance levels while their eye movements and driving performance were recorded. The results indicate that lower environmental luminance leads to prolonged gaze duration, a wider distribution of gaze points, and an increase in lateral steering errors. At moderate luminance, drivers exhibited enhanced optical flow perception and improved steering accuracy. However, under low luminance, degraded optical flow weakened the coupling between gaze and self-motion, caused a misalignment between gaze and vehicle motion, leading to reduced steering accuracy. These findings advance previous work by demonstrating that luminance not only affects gaze behavior but also modulates visual perception through its impact on optical flow processing. These insights may support the development of adaptive driver training programs and human-centered driver assistance systems that respond to perceptual challenges in low-luminance environments.</p>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":" ","pages":""},"PeriodicalIF":1.7,"publicationDate":"2025-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144979441","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Correction to: Do \"auditory\" and \"visual\" time really feel the same? Effects of stimulus modality on duration and passage-of-time judgements.","authors":"Daniel Bratzke","doi":"10.3758/s13414-025-03153-z","DOIUrl":"https://doi.org/10.3758/s13414-025-03153-z","url":null,"abstract":"","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":" ","pages":""},"PeriodicalIF":1.7,"publicationDate":"2025-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144979031","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"When right side up is upside down: Vertical attention bias tracks interactive feature regularities in upright and inverted images.","authors":"Matthew D Langley, Madelaine T Vu, Michael K McBeath","doi":"10.3758/s13414-025-03148-w","DOIUrl":"https://doi.org/10.3758/s13414-025-03148-w","url":null,"abstract":"<p><p>We previously proposed a Vertical Attention Bias (VAB) that directs attention toward object tops and scene bottoms and robustly confirmed this effect in both adults and 4- to 7-year-old children. Our past findings are consistent with progressive ecological theory, and support that our perceptual biases are coupled to informative environmental regularities. This leads observers to generally favor a downward gaze to facilitate attending more to functionally and behaviorally relevant locations. Here, we examine orientation effects using upright or inverted images presented in triptych sets to further test the overall VAB pattern. Participants made similarity judgments between a central target image of an object or scene and flanking images containing either the same top-half or the same bottom-half as the target image. Experiment 1 presented upright triptych images and replicated past VAB findings. Experiment 2 presented the same triptychs in an inverted orientation. In this context, the environmental regularity of interactive feature placement is incongruent with conventional spatial location in the presented image. Here object and scene tops are positioned in the lower image portion, and bottoms in the upper image portion. Results extend previous findings and confirm that VAB effects favoring object tops and scene bottoms flip along with the inverted image, though statistically weaker. Taken together, the findings support that the typical vertical interactive feature imbalance in real-world stimuli drives a generic downward vantage tendency. This directs attention toward the locations of meaningful, behaviorally relevant environmental aspects, which helps focus attention on personal action space and body-level affordances.</p>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":" ","pages":""},"PeriodicalIF":1.7,"publicationDate":"2025-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144979384","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Retrieval from long-term memory does not bypass working memory.","authors":"Michael K P Mugno, Timothy J Vickery","doi":"10.3758/s13414-025-03145-z","DOIUrl":"https://doi.org/10.3758/s13414-025-03145-z","url":null,"abstract":"<p><p>Information retrieved from long-term memory (LTM) enters working memory (WM), and the amount of information that can be retrieved is constrained to the limits of WM (about three to four items; Fukuda & Woodman, Proceedings of the National Academy of Sciences, 114 (20), 5306-5311, 2017). Can LTM retrieval occur when WM is near capacity, without consequence to either the retrieved or the maintained information? Liu, Li, Theeuwes, and Wang (NeuroImage, 261: 119513, 2022) presented evidence that even when WM is near capacity, LTM items could still be reported. They argue that retrieved LTM items can bypass WM. We investigated this further by introducing continuous reporting of retrieved information and WM contents to their paradigm. If retrieval bypasses WM, then there should be no impairment of report accuracy to either WM contents or LTM-retrieved information. In the first experiment, WM reports suffered when an LTM item was retrieved. In the second, we found that when WM was near capacity (four items), the fidelity of LTM reports suffered compared to when WM was not (two items or no items). Additionally, WM contents were reported with lower fidelity when an LTM item was retrieved compared to a WM-only condition, under both two-item and four-item WM load. We conclude that LTM retrieval does not bypass WM.</p>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":" ","pages":""},"PeriodicalIF":1.7,"publicationDate":"2025-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144979291","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Effects of predictions robustness and object-based predictions on subjective visual perception.","authors":"Clara Carrez-Corral, Carole Peyrin, Pauline Rossel, Louise Kauffmann","doi":"10.3758/s13414-025-03150-2","DOIUrl":"https://doi.org/10.3758/s13414-025-03150-2","url":null,"abstract":"<p><p>Learned regularities about contextual associations between objects and scenes allow us to form predictions about the likely features of the environment, facilitating perception of noisy visual inputs. Studies have shown that blurred objects that can be predicted based on their scene context appear subjectively sharper than the same objects that cannot. Experiment 1 addressed whether this effect could be modulated by the robustness of context-based predictions. Participants performed a blur-matching task between two images, each depicting a blurred object in context. They had to adjust the blur level of the right object to match that of the left object (Target). Robustness of context-based predictions was manipulated via phase-coherence alteration in scene contexts. Results showed that robustly predicted objects were subjectively perceived as sharper than less predictable objects when the Target object was noisy. Experiment 2 addressed whether object-based predictions also sharpen the perception of scene contexts. Participants performed a blur-matching task between two scenes and had to adjust the blur level of the right scene context to match that of the left one (Target). One scene contained an intact object (predictable context), while the other had a phase-scrambled object (unpredictable context). Results showed that at objectively equal blur levels participants perceived predictable scenes as sharper than unpredictable ones, again only when the Target scene was noisy. These results suggest that perceptual sharpening mainly occurs when the visual signal is noisy and predictions are robust enough to disambiguate it, and reveal reciprocal influences between context- and object-based predictions in shaping visual perception.</p>","PeriodicalId":55433,"journal":{"name":"Attention Perception & Psychophysics","volume":" ","pages":""},"PeriodicalIF":1.7,"publicationDate":"2025-08-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144979055","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}