{"title":"Use of target drift in heading judgments.","authors":"Li Li, Simon K Rushton, Rongrong Chen, Jing Chen","doi":"10.1167/jov.25.7.9","DOIUrl":"10.1167/jov.25.7.9","url":null,"abstract":"<p><p>The change in direction of a target object relative to a translating observer (or a point fixed relative to the observer), \"target drift,\" provides information about the observer's direction of self-movement (i.e., heading) with respect to the target. Relative drift rate (normalized with cues to motion-in-depth) provides information about the observer's absolute direction of heading relative to the surrounding scene. We investigated the utility of target drift by comparing heading judgments with target drift and \"extra-drift\" cues (the cues available in the changing optic array except target drift) in isolation and together during simulated forward translation. Across four experiments, we found that with the target drift cue alone, participants were able to make precise judgments of both nominal and absolute heading (≤1.53°). Judgments were at least as precise with the target drift cue alone as with extra-drift cues alone. The addition of extra-drift cues to the drift cue did not improve precision, and the pattern of reaction times suggests that the two cues are processed independently. We conclude that target drift can be an effective and powerful cue for heading judgments.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 7","pages":"9"},"PeriodicalIF":2.0,"publicationDate":"2025-06-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12184796/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144334265","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Optical material properties affect detection of deformation of non-rigid rotating objects, but only slightly.
Authors: Mitchell J P van Zuijlen, Yung-Hao Yang, Jan Jaap R van Assen, Shin'ya Nishida
Journal of Vision, 25(6):6, published 2025-05-01. DOI: 10.1167/jov.25.6.6
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12080736/pdf/
Abstract: Although rigid three-dimensional (3D) motion perception has been extensively studied, the visual detection of non-rigid 3D motion remains underexplored, particularly with regard to its interactions with material perception. In natural environments with various materials, image movements produced by geometry-dependent optical effects, such as diffuse shadings, specular highlights, and transparent glitters, impose computational challenges for accurately perceiving object deformation. This study examines how optical material properties influence human perception of non-rigid deformations. In a two-interval forced choice task, observers were shown a pair of rigid and non-rigid objects and asked to select the one that appeared more deformed. The object deformation varied across six intensity levels, and the stimuli included four materials (dotted matte, glossy, mirror, and transparent). We found that the material has only a small effect on deformation detection, with the threshold being slightly higher for transparent than for other materials. The results remained the same regardless of the viewing angles, light field conditions (Experiment 1), and the deformation type (Experiment 2). These results show the robust capacity of the human visual system to perceive non-rigid object motion in complex natural visual environments.

{"title":"Capacity and architecture of emotional face-ensemble coding.","authors":"Daniel Fitousi","doi":"10.1167/jov.25.6.10","DOIUrl":"10.1167/jov.25.6.10","url":null,"abstract":"<p><p>The ability to process emotion in ensembles of faces is essential for social functioning and survival. This study investigated the efficiency and underlying architecture of this ability in two contrasting tasks: (a) extracting the mean emotion from a set of faces, and (b) visually searching for a single, redundant-target face within an ensemble. I asked whether these tasks rely on similar or distinct processing mechanisms. To address this, I applied the capacity coefficient-a rigorous measure based on the entire response time distribution. In Experiment 1, participants judged the average emotion of face ensembles. In Experiments 2 and 3, participants searched for a predefined emotional target among multiple faces. In both tasks, workload was manipulated by varying the number of faces in the display. Results revealed that ensemble averaging is a super-capacity process that improves with increased workload, while visual search is capacity-limited and impaired by greater workload. These findings suggest that averaging is a preattentive process supported by a coactive, summative architecture, whereas visual search is attention-dependent and governed by a serial or parallel architecture with inhibitory interactions between display items.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 6","pages":"10"},"PeriodicalIF":2.0,"publicationDate":"2025-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12124146/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144152675","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Characterizing the circularly oriented macular pigment using spatiotemporal sensitivity to structured light entoptic phenomena.
Authors: Dmitry A Pushin, Davis V Garrad, Connor Kapahi, Andrew E Silva, Pinki Chahal, David G Cory, Mukhit Kulmaganbetov, Iman Salehi, Melanie A Mungalsingh, Taranjit Singh, Benjamin Thompson, Daniel Yu, Dusan Sarenac
Journal of Vision, 25(6):11, published 2025-05-01. DOI: 10.1167/jov.25.6.11
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12126121/pdf/
Abstract: To characterize the optical density of circularly oriented macular pigment (MP) in the human retina as a quantification of macular health, psychophysical discrimination tests were performed on human subjects using structured light-induced entoptic phenomena. Central exclusions were used to determine the visual extents of stimuli with varying spatiotemporal frequencies. A model was developed to describe the action of circularly oriented MP and to map stimuli to perceived sizes. The experimental results provided validation for the computational model, showing good agreement between measured data and predictions with a Pearson χ² fit statistic of 0.06. This article describes a new quantification of macular health and the tools necessary for its clinical development. The integration of structured light into vision science has led to the development of more selective and versatile entoptic probes of eye health that provide interpretable thresholds of structured light perception. This work develops a model that maps perceptual thresholds of entoptic phenomena to the underlying MP structure that supports its perception. We selectively characterize the circularly oriented MP optical density, rather than the total MP optical density as typically measured. The presented techniques can be applied in novel early diagnostic tests for a variety of diseases related to macular degeneration, such as age-related macular degeneration, macular telangiectasia, and pathological myopia. This work both provides insights into the microstructure of the human retina and establishes a new quantification of macular health.

{"title":"Adaptive focus: Investigating size tuning in visual attention using SSVEP.","authors":"Guangyu Chen, Yasuhiro Hatori, Chia-Huei Tseng, Satoshi Shioiri","doi":"10.1167/jov.25.6.1","DOIUrl":"https://doi.org/10.1167/jov.25.6.1","url":null,"abstract":"<p><p>This study investigates the size tuning of visual spatial attention using steady-state visual evoked potential (SSVEP) to understand how visual attention efficiently adapts and directs to specific spatial extents. Sixteen participants performed a task involving the rapid serial visual presentation of digits of varying sizes while their brain activity was monitored using electroencephalography. The stimuli flickered at different frequencies, and participants detected target digits at specified sizes. Analysis of SSVEP amplitudes and intertrial phase coherence revealed that visual attention exhibited size tuning with the maximum attentional modulation when the attended size matched the stimulus size. A difference of Gaussian function effectively modeled the facilitation around the attended size and inhibition for adjacent sizes. These findings suggest that visual attention can precisely adjust its focus to enhance processing efficiency, aligning with the zoom lens hypothesis. Our SSVEP study provides strong neural evidence underlying the adaptability of visual attention to varying spatial demands.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 6","pages":"1"},"PeriodicalIF":2.0,"publicationDate":"2025-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12054682/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144047774","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Eye-movement patterns for perceiving bistable figures.","authors":"Yi-Hsuan Hsu, Chien-Chung Chen","doi":"10.1167/jov.25.6.3","DOIUrl":"https://doi.org/10.1167/jov.25.6.3","url":null,"abstract":"<p><p>Bistable figures can generate two different percepts alternating with each other. It is suggested that eye fixation plays an important role in bistable figure perception because it helps us selectively focus on certain image features. We tested how the shift of percept is related to the eye-fixation pattern and whether inhibition of return (IOR) plays a role in this process. IOR refers to the phenomenon where, after attention remains at the same image location for a period, the inhibition to the mechanisms supporting that location increases. Consequently, visual attention shifts to a new location, and reallocation to the original location is suppressed. We used an eye tracker to record the observers' eye movements during observation of the duck/rabbit figure and the Necker cube while recording their percept reversals. In Experiment 1, we showed there were indeed different eye fixation patterns for different percepts. Also, the fixation shifted across different regions that occurred before the percept reversal. In Experiment 2, we examined the influence of inward bias on the duck/rabbit figure and found that it had a significant effect on the first percept but that this effect diminished over time. In Experiment 3, a mask was added to the attended region to remove the local saliency. This manipulation increased the number of percept reversals and fixation shifts across different regions. That is, the change in local saliency can cause a fixation shift and thus reverse our perception. Our result shows that what we perceive depends on where we look.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 6","pages":"3"},"PeriodicalIF":2.0,"publicationDate":"2025-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12061061/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144024480","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Visual search efficiency is modulated by symmetry type and texture regularity.
Authors: Rachel Moreau, Nihan Alp, Alasdair D F Clarke, Erez Freud, Peter J Kohler
Journal of Vision, 25(6):7, published 2025-05-01. DOI: 10.1167/jov.25.6.7
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12110545/pdf/
Abstract: More than a century of vision research has identified symmetry as a fundamental cue, which aids the visual system in making inferences about objects and surfaces in natural scenes. Most studies have focused on one type of symmetry, reflection, presented at a single image location. However, the visual system responds strongly to other types of symmetries and to symmetries that are repeated across the image plane to form textures. Here we use a visual search paradigm with arrays of repeating unit cells that contained either reflection or rotation symmetries but were otherwise matched. Participants were asked to report the presence of a target tile without symmetry. When unit cells tile the plane without gaps, they form regular textures. We manipulated texture regularity by introducing jittered gaps between unit cells. This paradigm lets us investigate the effect of symmetry type and texture regularity on visual search efficiency. Based on previous findings suggesting an advantage for reflection in visual processing, we hypothesized that search would be more efficient for reflection than rotation. We further hypothesized that regular textures would be processed more efficiently. We found independent effects of symmetry type and regularity on search efficiency that confirmed both hypotheses: Visual search was more efficient for textures with reflection symmetry and more efficient for regular textures. This provides additional support for the perceptual advantage of reflection in the context of visual search and provides important new evidence in favor of visual mechanisms specialized for processing symmetries in regular textures.

Title: Following Randolph Blake's furrow further.
Authors: Anna Riga, Stuart Anstis, Ian M Thornton, Patrick Cavanagh
Journal of Vision, 25(6):9, published 2025-05-01. DOI: 10.1167/jov.25.6.9
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12118504/pdf/
Abstract: In 1992, Randolph Blake, in collaboration with Robert Cormack and Eric Hiris, reported a strong deviation in perceived direction for a target moving over an oblique, static grating. Here we follow up on this effect, subsequently called the furrow illusion, to determine its origin. We find, unlike Cormack et al., that it is influenced by the luminance of the target and that it does not survive smooth pursuit of a moving fixation that stabilizes the target on the retina. We also introduce an inverted version of the furrow stimulus with the static grating visible only within the moving target rather than only around it. This "peep-hole" furrow stimulus shows a similar deviation in its direction and is quite similar to the well-known double-drift stimulus (Lisi & Cavanagh, 2015). Like the double-drift but unlike the furrow stimulus, its illusory direction persists when tracking a fixation that moves in tandem with the target. The main source of the illusion in both cases appears to be the terminators where the grating's bars meet the target contour. These terminators move laterally along the target's contour as the target moves vertically, and the combination of these two directions creates the illusory oblique motion. However, the loss of the illusion for the tracked furrow stimulus suggests a contribution in this case either from negative afterimages within the target or from induced motion.

{"title":"Visuomotor adaptation to constant and varying delays in a target acquisition task.","authors":"Sam Beech, Danaë Stanton Fraser, Iain D Gilchrist","doi":"10.1167/jov.25.6.8","DOIUrl":"10.1167/jov.25.6.8","url":null,"abstract":"<p><p>In visually guided movement tasks, visual feedback delays disrupt visuomotor control and impair performance. Adaptation then occurs as compensatory visuomotor updates are generated to accommodate the delay and recover control. Following the removal of the delay, an after-effect is observed, where the retention of this visuomotor update impairs post-exposure performance relative to the pre-exposure baseline. Although adaptation has previously been explored in response to constant delays, there has been no investigation into how continuously varying delays affect adaptation. In this experiment, participants completed a mouse-based target acquisition task with either a constant or varying delay between the mouse and cursor movements. At first exposure to the delays, completion times were large, and both delay conditions frequently overshot the target. With repeated exposure, the precision of the movements improved, resulting in lower completion times and fewer overshoots. The constant and varying delay conditions showed similar rates of change throughout the exposure phase, suggesting similar adaptation rates. Following the removal of the delay, the two delay conditions demonstrated similar post-exposure after-effects, as they systematically undershot the target and showed a decrease in overshooting relative to the pre-exposure baseline. Despite delay variability imposing an unstable error signal between the expected and actual cursor locations, this did not disrupt adaptation. These results suggest that the participants in the varying delay condition adapted to the mean delay and that the fluctuations away from this value did not disrupt the generation of the visuomotor updates.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 6","pages":"8"},"PeriodicalIF":2.0,"publicationDate":"2025-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12118507/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144129456","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Interleaved periods of exercise do not enhance visual perceptual learning.
Authors: Ken W S Tan, Amritha Stalin, Adela S Y Park, Kristine Dalton, Benjamin Thompson
Journal of Vision, 25(6):5, published 2025-05-01. DOI: 10.1167/jov.25.6.5
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12068527/pdf/
Abstract: Animal models indicate that exercise promotes visual cortex neuroplasticity; however, results from studies that have explored this effect in humans are mixed. A potential explanation for these discrepant results is the relative timing of exercise and the task used to index neuroplasticity. We hypothesized that a close temporal pairing of exercise and training on a vision task would enhance perceptual learning (a measure of neuroplasticity) compared to a non-exercise control. Thirty-two participants (mean age = 31 years; range, 20-65; SD = 11.1; 50:50 sex ratio) were randomly assigned to Exercise or Non-Exercise groups. The Exercise group alternated between moderate cycling along a virtual course and training on a peripheral crowding task (5 minutes each, 1 hour total intervention), and the Non-Exercise group alternated between passive viewing of the virtual cycling course and the vision task. The protocol was repeated across 5 consecutive days. Both groups exhibited reduced visual crowding after 5 days of training. However, there was no difference in perceptual learning magnitude or rate between groups. Translation of the animal exercise and visual cortex neuroplasticity results to humans may depend on a range of factors, such as baseline fitness levels and the measures used to quantify neuroplasticity.