{"title":"Perceptual organization is limited in peripheral vision: Evidence from configural superiority.","authors":"Cathleen M Moore, Qingzi Zheng, Yelda Semizer","doi":"10.1167/jov.25.11.16","DOIUrl":"10.1167/jov.25.11.16","url":null,"abstract":"<p><p>Perceptual organization refers collectively to those processes by which the three-dimensional structure and material properties of surfaces are abstracted from image information. It is a critical foundation of object perception. Examples of perceptual organization include the assignment of relative depth to different contrast regions, the representation of three-dimensional shape based on two-dimensional geometry, and the representation of completed regions of occluded surfaces behind other surfaces. Perceptual organization is typically studied with stimuli at fixation, where visual acuity is high; however, stimuli in the periphery are represented with poor fidelity and may not support those processes. We tested the hypothesis that perceptual organization is limited in peripheral vision by measuring configural superiority effects for four different perceptual organization processes with stimuli in central and peripheral locations. We found configural superiority for stimuli defined by surface completion, three-dimensional shape, transparency/surface scission, and shape from closure for stimuli at fixation, providing evidence that each of these processes occurred for those stimuli. However, when the same stimuli were presented in peripheral locations, but size-scaled to compensate for acuity differences, no configural superiority occurred. This is consistent with those processes having failed for those stimuli. These results suggest that peripheral vision, unlike central vision, is not object based and that it serves a fundamentally different function.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 11","pages":"16"},"PeriodicalIF":2.3,"publicationDate":"2025-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12492462/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145151473","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"Psychology","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A unified account of current-future control and affordance-based control for running to catch fly balls.","authors":"Dees B W Postma, Frank T J M Zaal","doi":"10.1167/jov.25.11.9","DOIUrl":"10.1167/jov.25.11.9","url":null,"abstract":"<p><p>Current-future control and affordance-based control offer two distinct approaches to understanding the visual guidance of action. Current-future control strategies, such as the optical acceleration cancellation strategy, are essentially error-nulling strategies. That is, the visual guidance of action is predicated on nulling the error around the critical value of some optical invariant. Whether error nulling is (still) possible, given the action capabilities of the agent, is not specified in current-future control strategies. There is no specification of affordances. Affordance-based control resolves this issue by incorporating action capabilities into the control of action. Although promising, affordance-based control strategies have been criticized for their lack of specific predictions for the control of movement. Affordance-based control strategies are under-constrained in the sense that there is no specific control law that guides action within the space of possibilities. In this contribution, we resolve this issue by showing that current-future control and affordance-based control need not be fundamentally different and that their guiding principles can in fact be reconciled. We propose a new control strategy within the fly-ball paradigm and show its effectiveness through simulations. We show that our model makes clear predictions about the control of interceptive behavior while also informing the fielder about the catchability of fly balls.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 11","pages":"9"},"PeriodicalIF":2.3,"publicationDate":"2025-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12448125/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145042094","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"Psychology","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Eye tracking for the classification of visual function in individuals with vision impairment: A validation in Para athletes.","authors":"Ward Nieboer, David L Mann","doi":"10.1167/jov.25.11.5","DOIUrl":"10.1167/jov.25.11.5","url":null,"abstract":"<p><p>Eye tracking has the potential to be used as a meaningful measure of the consequences of vision impairment (VI), yet a comprehensive test battery is lacking. In this study, we sought to evaluate the feasibility and validity of a test battery of eye movements as a tool to measure visual performance in individuals with VI. A test battery including fixation stability, smooth pursuit, saccades, free viewing, and visual search was administered to 46 athletes with VI and 10 control participants. Feasibility was determined by test completion rates. Construct validity was assessed by comparing eye movement outcomes across different VI subgroups, and predictive validity was evaluated by examining the relationship between eye movement metrics and in-competition sport performance. The test battery proved feasible, with 88% of athletes with VI able to complete the tests. Eye movement variables distinguished between subgroups, supporting construct validity. For example, participants with combined central and peripheral vision loss showed longer fixation durations during free viewing and visual search, while those with central vision loss had prolonged saccades during free viewing and fewer and smaller eye movements during visual search. Predictive validity was indicated by significant correlations between eye movement metrics and sport performance, suggesting that eye tracking can predict real-world outcomes. Our findings suggest that an assessment of eye movements provides a feasible, valid, and largely objective measure of the functional consequences of VI that may extend beyond the information obtained using traditional tests of visual acuity and visual field, and that those measures can help to predict the sport performance of Para athletes.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 11","pages":"5"},"PeriodicalIF":2.3,"publicationDate":"2025-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12429707/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145024713","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"Psychology","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"DynTex: A real-time generative model of dynamic naturalistic luminance textures.","authors":"Andrew Isaac Meso, Jonathan Vacher, Nikos Gekas, Pascal Mamassian, Laurent U Perrinet, Guillaume S Masson","doi":"10.1167/jov.25.11.2","DOIUrl":"10.1167/jov.25.11.2","url":null,"abstract":"<p><p>The visual systems of animals work in diverse and constantly changing environments where organism survival requires effective senses. To study the hierarchical brain networks that perform visual information processing, vision scientists require suitable tools, and Motion Clouds (MCs)-a dense mixture of drifting Gabor textons-serve as a versatile solution. Here, we present an open toolbox intended for the bespoke use of MC functions and objects within modeling or experimental psychophysics contexts, including easy integration within Psychtoolbox or PsychoPy environments. The toolbox includes output visualization via a Graphic User Interface. Visualizations of parameter changes in real time give users an intuitive feel for adjustments to texture features like orientation, spatiotemporal frequencies, bandwidth, and speed. Vector calculus tools serve the frame-by-frame autoregressive generation of fully controlled stimuli, and use of the GPU allows this to be done in real time for typical stimulus array sizes. We give illustrative examples of experimental use to highlight the potential with both simple and composite stimuli. The toolbox is developed for, and by, researchers interested in psychophysics, visual neurophysiology, and mathematical and computational models. We argue the case that in all these fields, MCs can bridge the gap between well-parameterized synthetic stimuli like dots or gratings and more complex and less controlled natural videos.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 11","pages":"2"},"PeriodicalIF":2.3,"publicationDate":"2025-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12419482/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144976628","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"Psychology","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Binocular cues to 3D face structure increase activation in depth-selective visual cortex with negligible effects in face-selective areas.","authors":"Eva Deligiannis, Marisa Donnelly, Carol Coricelli, Karsten Babin, Kevin M Stubbs, Chelsea Ekstrand, Laurie M Wilcox, Jody C Culham","doi":"10.1167/jov.25.11.6","DOIUrl":"10.1167/jov.25.11.6","url":null,"abstract":"<p><p>Studies of visual face processing often use flat images as proxies for real faces due to their ease of manipulation and experimental control. Although flat images capture many features of a face, they lack the rich three-dimensional (3D) structural information available when binocularly viewing real faces (e.g., binocular cues to a long nose). We used functional magnetic resonance imaging to investigate the contribution of naturalistic binocular depth information to univariate activation levels and multivariate activation patterns in depth- and face-selective human brain regions. We used two cameras to capture images of real people from the viewpoints of the two eyes. These images were presented with natural viewing geometry (such that the size, distance, and binocular disparities were comparable to a real face at a typical viewing distance). Participants viewed stereopairs under four conditions: accurate binocular disparity (3D), zero binocular disparity (two-dimensional [2D]), reversed binocular disparity (pseudoscopic 3D), and no binocular disparity (monocular 2D). Although 3D faces (both 3D and pseudoscopic 3D) elicited higher activation levels than 2D faces, as well as distinct activation patterns, in depth-selective occipitoparietal regions (V3A, V3B, IPS0, IPS1, hMT+), face-selective occipitotemporal regions (OFA, FFA, pSTS) showed limited sensitivity to internal facial disparities. These results suggest that 2D images are a reasonable proxy for studying the neural basis of face recognition in face-selective regions, although contributions from 3D structural processing within the dorsal visual stream warrant further consideration.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 11","pages":"6"},"PeriodicalIF":2.3,"publicationDate":"2025-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12429739/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145031090","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"Psychology","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Continuous tracking of audiovisual motion.","authors":"Alessia Tonelli, David Burr, Monica Gori, David Alais","doi":"10.1167/jov.25.10.10","DOIUrl":"10.1167/jov.25.10.10","url":null,"abstract":"<p><p>Multisensory processing is important for studying and understanding typical and atypical development; however, traditional paradigms involve numerous conditions and trials, making sessions long and tedious. A technique referred to as \"continuous tracking\" has been introduced that can assess perceptual thresholds in a shorter time. We tested this technique in an audiovisual context by asking participants to track 1-minute audiovisual stimuli moving in a random walk. The stimuli could be visual, auditory, or audiovisual. In the last case, we had a congruent and an incongruent condition with a spatiotemporal shift between the two stimuli, so either vision or audition led the walk by a given time. We further modulated the reliability of the visual stimulus to shift the weight toward the audio. We found a straightforward visual dominance regarding motion perception in audiovisual contexts. Regardless of its reliability, visual information interferes with auditory perception. Moreover, the continuous tracking yielded a new measurement of motion perception, the lag, giving information on the delay between visual and auditory information processing. Indeed, we observed that the tracking of auditory motion lagged relative to visual motion.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 10","pages":"10"},"PeriodicalIF":2.3,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12369911/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144876424","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"Psychology","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"To what extent do cataracts and cataract surgery change perception?","authors":"Simona Garobbio, Hanna Zuche, Ursula Hall, Nina L Giudici, Chrysoula Gabrani, Hendrik P N Scholl, Michael H Herzog","doi":"10.1167/jov.25.10.13","DOIUrl":"10.1167/jov.25.10.13","url":null,"abstract":"<p><p>Cataract surgery is the most commonly performed surgical procedure worldwide and is typically associated with an improvement in visual acuity (VA). This study aimed to examine how various visual functions, beyond VA and contrast sensitivity, are affected by cataracts and how they change after cataract surgery. We assessed 28 adults (aged 55-85 years) with vision-impairing cataracts using a comprehensive battery of visual tests at four visits: before surgery, 1 week after surgery of the first eye, 1 week after surgery of the second eye, and 1 month after the second surgery. Tests included VA, contrast sensitivity, coherent motion (CMot), orientation discrimination, visual search, and reaction time, assessed monocularly and binocularly. Both a cognitive and a self-assessment questionnaire were administered at the first and last visits. Results indicated that cataracts impaired all visual functions except CMot. Postoperatively, VA, contrast sensitivity, and CMot improved significantly, with marginal gains in orientation discrimination and no change in visual search or reaction times. Improvements were greater after the first surgery. Also, stronger correlations between low-level visual functions, cataract severity, and self-assessment scores were observed for the first operated eye. Cognitive scores correlated significantly with performance in CMot, orientation discrimination, and visual search. These findings suggest that cataracts strongly affect low-level visual processing, whereas higher-level tasks may be maintained through cognitive compensation. Cataract surgery restores performance in most but not all visual tests, highlighting the importance of considering visual function beyond VA, as well as cognitive functioning, in ophthalmic clinical care.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 10","pages":"13"},"PeriodicalIF":2.3,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12395818/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144976643","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"Psychology","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Temporal attention and oculomotor effects dissociate distinct types of temporal expectation.","authors":"Aysun Duyar, Marisa Carrasco","doi":"10.1167/jov.25.10.3","DOIUrl":"10.1167/jov.25.10.3","url":null,"abstract":"<p><p>Temporal expectation-the ability to predict when events occur-relies on probabilistic information within the environment. Two types of temporal expectation-temporal precision, based on the variability of an event's onset, and hazard rate, based on the increasing probability of an event with onset delay-interact with temporal attention (the ability to prioritize specific moments) at the performance level. Attentional benefits increase with precision but diminish with hazard rate. Both temporal expectation and temporal attention improve fixational stability; however, the distinct oculomotor effects of temporal precision and hazard rate, as well as their interactions with temporal attention, remain unknown. Investigating microsaccade dynamics, we found that hazard-based expectations were reflected in the oculomotor responses, whereas precision-based expectations emerged only when temporal attention was deployed. We also found perception-eye movement dissociations for both types of temporal expectation; yet, attentional benefits in performance coincided with microsaccade rate modulations. These findings reveal an interplay among distinct types of temporal expectation and temporal attention in enhancing and recalibrating fixational stability.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 10","pages":"3"},"PeriodicalIF":2.3,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12338366/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144785851","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"Psychology","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The identification of materials from patterns of fluid flow.","authors":"James T Todd, J Farley Norman","doi":"10.1167/jov.25.10.11","DOIUrl":"https://doi.org/10.1167/jov.25.10.11","url":null,"abstract":"<p><p>The physical interactions among objects in the natural environment can cause dramatic changes in their shapes or patterns of motion, and those changes can provide reliable information to distinguish different types of events or materials. The present research was designed to investigate the identification of fluid materials. Observers viewed computer animations and static images of a shiny orange translucent fluid flowing from a tube into a glass jar, and they were asked to make confidence ratings about whether the depicted material looked like water/juice, oil/paint, honey/molasses, or caulk/toothpaste. The results reveal that observers can identify different types of fluid materials within broad overlapping categories based on qualitative characteristics of fluid flow that only occur within limited ranges of viscosity.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 10","pages":"11"},"PeriodicalIF":2.3,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12395805/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144976613","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"Psychology","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Peripheral overconfidence in a scene categorization task.","authors":"Nino Sharvashidze, Matteo Toscani, Matteo Valsecchi","doi":"10.1167/jov.25.10.2","DOIUrl":"10.1167/jov.25.10.2","url":null,"abstract":"<p><p>Our ability to detect and discriminate stimuli differs across the visual field. Does metaperception (i.e., visual confidence) follow these differences? Evidence is mixed, as studies have reported overconfidence in peripheral detection tasks and underconfidence in a peripheral local orientation discrimination task. Here, we tested whether overconfidence can arise in a task that aligns with the strengths of peripheral vision: rapid scene categorization. In each interval, our participants viewed a scene only in the periphery (scotoma) or only in the center (window) and categorized it (desert, beach, mountain, or forest). Subsequently, they indicated the interval for which they were more confident in their judgment. Task difficulty was manipulated by varying the scotoma and window size. Accuracy decreased with the increasing size of the scotoma and increased with the increasing size of the window. We computed the probability of higher confidence in the periphery as a function of the expected performance difference between the two conditions. Participants' points of equal confidence were systematically shifted toward higher central perceptual performance, indicating that higher visibility in the center was needed to produce matched perceptual confidence and demonstrating overconfidence in the periphery. This suggests that changing the task from local orientation discrimination to global scene categorization (i.e., a task where peripheral vision outperforms foveal vision) reversed the metaperceptual bias. The periphery is suited for detecting objects and processing global information, but not for discriminating fine details or local features. Metacognitive judgments seem to follow these inherent capabilities and constraints of peripheral vision.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 10","pages":"2"},"PeriodicalIF":2.3,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12320901/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144762159","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"Psychology","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}