Optical material properties affect detection of deformation of non-rigid rotating objects, but only slightly.
Mitchell J P van Zuijlen, Yung-Hao Yang, Jan Jaap R van Assen, Shin'ya Nishida
Journal of Vision, 25(6):6, 2025-05-01. doi: 10.1167/jov.25.6.6
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12080736/pdf/
Abstract: Although rigid three-dimensional (3D) motion perception has been extensively studied, the visual detection of non-rigid 3D motion remains underexplored, particularly with regard to its interactions with material perception. In natural environments with various materials, image movements produced by geometry-dependent optical effects, such as diffuse shadings, specular highlights, and transparent glitters, impose computational challenges for accurately perceiving object deformation. This study examines how optical material properties influence human perception of non-rigid deformations. In a two-interval forced-choice task, observers were shown a pair of rigid and non-rigid objects and asked to select the one that appeared more deformed. The object deformation varied across six intensity levels, and the stimuli included four materials (dotted matte, glossy, mirror, and transparent). We found that material has only a small effect on deformation detection, with the threshold being slightly higher for transparent than for the other materials. The results remained the same regardless of viewing angle, light field conditions (Experiment 1), and deformation type (Experiment 2). These results show the robust capacity of the human visual system to perceive non-rigid object motion in complex natural visual environments.

{"title":"Capacity and architecture of emotional face-ensemble coding.","authors":"Daniel Fitousi","doi":"10.1167/jov.25.6.10","DOIUrl":"https://doi.org/10.1167/jov.25.6.10","url":null,"abstract":"<p><p>The ability to process emotion in ensembles of faces is essential for social functioning and survival. This study investigated the efficiency and underlying architecture of this ability in two contrasting tasks: (a) extracting the mean emotion from a set of faces, and (b) visually searching for a single, redundant-target face within an ensemble. I asked whether these tasks rely on similar or distinct processing mechanisms. To address this, I applied the capacity coefficient-a rigorous measure based on the entire response time distribution. In Experiment 1, participants judged the average emotion of face ensembles. In Experiments 2 and 3, participants searched for a predefined emotional target among multiple faces. In both tasks, workload was manipulated by varying the number of faces in the display. Results revealed that ensemble averaging is a super-capacity process that improves with increased workload, while visual search is capacity-limited and impaired by greater workload. These findings suggest that averaging is a preattentive process supported by a coactive, summative architecture, whereas visual search is attention-dependent and governed by a serial or parallel architecture with inhibitory interactions between display items.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 6","pages":"10"},"PeriodicalIF":2.0,"publicationDate":"2025-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144152675","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Characterizing the circularly oriented macular pigment using spatiotemporal sensitivity to structured light entoptic phenomena.
Dmitry A Pushin, Davis V Garrad, Connor Kapahi, Andrew E Silva, Pinki Chahal, David G Cory, Mukhit Kulmaganbetov, Iman Salehi, Melanie A Mungalsingh, Taranjit Singh, Benjamin Thompson, Daniel Yu, Dusan Sarenac
Journal of Vision, 25(6):11, 2025-05-01. doi: 10.1167/jov.25.6.11
Abstract: To characterize the optical density of circularly oriented macular pigment (MP) in the human retina as a quantification of macular health, psychophysical discrimination tests were performed on human subjects using structured light-induced entoptic phenomena. Central exclusions were used to determine the visual extents of stimuli with varying spatiotemporal frequencies. A model was developed to describe the action of circularly oriented MP and to map stimuli to perceived sizes. The experimental results validated the computational model, showing good agreement between measured data and predictions, with a Pearson χ² fit statistic of 0.06. This article describes a new quantification of macular health and the tools necessary for its clinical development. The integration of structured light into vision science has led to the development of more selective and versatile entoptic probes of eye health that provide interpretable thresholds of structured light perception. This work develops a model that maps perceptual thresholds of entoptic phenomena to the underlying MP structure that supports their perception. We selectively characterize the circularly oriented MP optical density, rather than the total MP optical density as typically measured. The presented techniques can be applied in novel early diagnostic tests for a variety of diseases related to macular degeneration, such as age-related macular degeneration, macular telangiectasia, and pathological myopia. This work both provides insights into the microstructure of the human retina and uncovers a new quantification of macular health.

{"title":"Adaptive focus: Investigating size tuning in visual attention using SSVEP.","authors":"Guangyu Chen, Yasuhiro Hatori, Chia-Huei Tseng, Satoshi Shioiri","doi":"10.1167/jov.25.6.1","DOIUrl":"https://doi.org/10.1167/jov.25.6.1","url":null,"abstract":"<p><p>This study investigates the size tuning of visual spatial attention using steady-state visual evoked potential (SSVEP) to understand how visual attention efficiently adapts and directs to specific spatial extents. Sixteen participants performed a task involving the rapid serial visual presentation of digits of varying sizes while their brain activity was monitored using electroencephalography. The stimuli flickered at different frequencies, and participants detected target digits at specified sizes. Analysis of SSVEP amplitudes and intertrial phase coherence revealed that visual attention exhibited size tuning with the maximum attentional modulation when the attended size matched the stimulus size. A difference of Gaussian function effectively modeled the facilitation around the attended size and inhibition for adjacent sizes. These findings suggest that visual attention can precisely adjust its focus to enhance processing efficiency, aligning with the zoom lens hypothesis. Our SSVEP study provides strong neural evidence underlying the adaptability of visual attention to varying spatial demands.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 6","pages":"1"},"PeriodicalIF":2.0,"publicationDate":"2025-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12054682/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144047774","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Eye-movement patterns for perceiving bistable figures.","authors":"Yi-Hsuan Hsu, Chien-Chung Chen","doi":"10.1167/jov.25.6.3","DOIUrl":"https://doi.org/10.1167/jov.25.6.3","url":null,"abstract":"<p><p>Bistable figures can generate two different percepts alternating with each other. It is suggested that eye fixation plays an important role in bistable figure perception because it helps us selectively focus on certain image features. We tested how the shift of percept is related to the eye-fixation pattern and whether inhibition of return (IOR) plays a role in this process. IOR refers to the phenomenon where, after attention remains at the same image location for a period, the inhibition to the mechanisms supporting that location increases. Consequently, visual attention shifts to a new location, and reallocation to the original location is suppressed. We used an eye tracker to record the observers' eye movements during observation of the duck/rabbit figure and the Necker cube while recording their percept reversals. In Experiment 1, we showed there were indeed different eye fixation patterns for different percepts. Also, the fixation shifted across different regions that occurred before the percept reversal. In Experiment 2, we examined the influence of inward bias on the duck/rabbit figure and found that it had a significant effect on the first percept but that this effect diminished over time. In Experiment 3, a mask was added to the attended region to remove the local saliency. This manipulation increased the number of percept reversals and fixation shifts across different regions. That is, the change in local saliency can cause a fixation shift and thus reverse our perception. Our result shows that what we perceive depends on where we look.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 6","pages":"3"},"PeriodicalIF":2.0,"publicationDate":"2025-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12061061/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144024480","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Visual search efficiency is modulated by symmetry type and texture regularity.
Rachel Moreau, Nihan Alp, Alasdair D F Clarke, Erez Freud, Peter J Kohler
Journal of Vision, 25(6):7, 2025-05-01. doi: 10.1167/jov.25.6.7
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12110545/pdf/
Abstract: More than a century of vision research has identified symmetry as a fundamental cue, which aids the visual system in making inferences about objects and surfaces in natural scenes. Most studies have focused on one type of symmetry, reflection, presented at a single image location. However, the visual system responds strongly to other types of symmetries and to symmetries that are repeated across the image plane to form textures. Here we use a visual search paradigm with arrays of repeating unit cells that contained either reflection or rotation symmetries but were otherwise matched. Participants were asked to report the presence of a target tile without symmetry. When unit cells tile the plane without gaps, they form regular textures. We manipulated texture regularity by introducing jittered gaps between unit cells. This paradigm lets us investigate the effect of symmetry type and texture regularity on visual search efficiency. Based on previous findings suggesting an advantage for reflection in visual processing, we hypothesized that search would be more efficient for reflection than rotation. We further hypothesized that regular textures would be processed more efficiently. We found independent effects of symmetry type and regularity on search efficiency that confirmed both hypotheses: Visual search was more efficient for textures with reflection symmetry and more efficient for regular textures. This provides additional support for the perceptual advantage of reflection in the context of visual search and provides important new evidence in favor of visual mechanisms specialized for processing symmetries in regular textures.

Following Randolph Blake's furrow further.
Anna Riga, Stuart Anstis, Ian M Thornton, Patrick Cavanagh
Journal of Vision, 25(6):9, 2025-05-01. doi: 10.1167/jov.25.6.9
Abstract: In 1992, Randolph Blake, in collaboration with Robert Cormack and Eric Hiris, reported a strong deviation in perceived direction for a target moving over an oblique, static grating. Here we follow up on this effect, subsequently called the furrow illusion, to determine its origin. We find, unlike Cormack et al., that it is influenced by the luminance of the target and that it does not survive smooth pursuit of a moving fixation that stabilizes the target on the retina. We also introduce an inverted version of the furrow stimulus with the static grating visible only within the moving target rather than only around it. This "peep-hole" furrow stimulus shows a similar deviation in its direction and is quite similar to the well-known double-drift stimulus (Lisi & Cavanagh, 2015). Like the double-drift but unlike the furrow stimulus, its illusory direction persists when tracking a fixation that moves in tandem with the target. The main source of the illusion in both cases appears to be the terminators where the grating's bars meet the target contour. These terminators move laterally along the target's contour as the target moves vertically, and the combination of these two directions creates the illusory oblique motion. However, the loss of the illusion for the tracked furrow stimulus suggests a contribution either from negative afterimages within the target or from induced motion in this case.

{"title":"The effect of illumination cues on color constancy in simultaneous identification of illumination and reflectance changes.","authors":"Lari S Virtanen, Maria Olkkonen, Toni P Saarela","doi":"10.1167/jov.25.6.4","DOIUrl":"https://doi.org/10.1167/jov.25.6.4","url":null,"abstract":"<p><p>To provide a stable percept of the surface color of objects, the visual system needs to account for variation in illumination chromaticity. This ability is called color constancy. The details of how the visual system disambiguates effects of illumination and reflectance on the light reaching the eye are still unclear. Here we asked how independent illumination and reflectance judgments are of each other, whether color constancy depends on explicitly identifying the illumination chromaticity, and what kinds of contextual cues support this identification. We studied the simultaneous identification of illumination and reflectance changes with realistically rendered, abstract three-dimensional scenes. Observers were tasked to identify both of these changes between sequentially presented stimuli. The stimuli included a central object whose reflectance could vary and a background that only varied due to changes in illumination chromaticity. We manipulated the visual cues available in the background: local contrast and specular highlights. We found that identification of illumination and reflectance changes was not independent: While reflectance changes were rarely misidentified as illumination changes, illumination changes clearly biased reflectance judgments. However, correct identification of reflectance changes was also not fully dependent on correctly identifying the illumination change: Only when there was no illumination change in the stimulus did it lead to better color constancy, that is, correctly identifying the reflectance change. Discriminability of illumination changes did not vary based on available visual cues of local contrast or specular highlights. Yet discriminability of reflectance changes was improved with local contrast and, to a lesser extent, with specular highlights, in the stimulus. We conclude that a failure of color constancy does not depend on a failure to identify illumination changes, but additional visual cues still improve color constancy through better disambiguation of illumination and reflectance changes.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 6","pages":"4"},"PeriodicalIF":2.0,"publicationDate":"2025-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12063707/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144057140","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Approaches to understanding natural behavior.","authors":"Alexander Goettker, Nathaniel Powell, Mary Hayhoe","doi":"10.1167/jov.25.6.12","DOIUrl":"https://doi.org/10.1167/jov.25.6.12","url":null,"abstract":"<p><p>Many important questions cannot be addressed without considering vision in its natural context. How can we do this in a controlled and systematic way, given the intrinsic diversity and complexities of natural behavior? We argue that an important step is to start with better measurements of natural visually guided behavior to describe the visual input and behavior shown in these contexts more precisely. We suggest that, to go from pure description to understanding, diverse behaviors can be treated as a sequence of decisions, where humans need to make good action choices in the context of an uncertain world state, varying behavior goals, and noisy actions. Because natural behavior evolves in time over sequences of actions, these decisions involve both short- and long-term memory and planning. This strategy allows us to design experiments to capture these critical aspects while preserving experimental control. Other strategies involve progressive simplification of the experimental conditions, and leveraging individual differences, and we provide some examples of successful approaches. Thus, this article charts a path forward for developing paradigms for the systematic investigation of natural behavior.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 6","pages":"12"},"PeriodicalIF":2.0,"publicationDate":"2025-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144183088","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Interleaved periods of exercise do not enhance visual perceptual learning.
Ken W S Tan, Amritha Stalin, Adela S Y Park, Kristine Dalton, Benjamin Thompson
Journal of Vision, 25(6):5, 2025-05-01. doi: 10.1167/jov.25.6.5
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12068527/pdf/
Abstract: Animal models indicate that exercise promotes visual cortex neuroplasticity; however, results from studies that have explored this effect in humans are mixed. A potential explanation for these discrepant results is the relative timing of exercise and the task used to index neuroplasticity. We hypothesized that a close temporal pairing of exercise and training on a vision task would enhance perceptual learning (a measure of neuroplasticity) compared to a non-exercise control. Thirty-two participants (mean age = 31 years; range, 20-65; SD = 11.1; 50:50 sex ratio) were randomly assigned to Exercise or Non-Exercise groups. The Exercise group alternated between moderate cycling along a virtual course and training on a peripheral crowding task (5 minutes each, 1 hour total intervention), and the Non-Exercise group alternated between passive viewing of the virtual cycling course and the vision task. The protocol was repeated across 5 consecutive days. Both groups exhibited reduced visual crowding after 5 days of training. However, there was no difference in perceptual learning magnitude or rate between groups. Translation of the animal exercise and visual cortex neuroplasticity results to humans may depend on a range of factors, such as baseline fitness levels and the measures used to quantify neuroplasticity.