{"title":"Gloss discrimination: Toward an image-based perceptual model.","authors":"Jacob R Cheeseman, James A Ferwerda, Takuma Morimoto, Roland W Fleming","doi":"10.1167/jov.25.10.6","DOIUrl":"10.1167/jov.25.10.6","url":null,"abstract":"<p><p>Gloss is typically considered the perceptual counterpart of a surface's reflectance characteristics. Yet, asking how discriminable two surfaces are on the basis of surface properties is a poorly posed question, as scene factors other than reflectance can have substantial effects on how discriminable two glossy surfaces are to humans. This difficulty with predicting gloss discrimination has so far hobbled efforts to establish a perceptual standard for surface gloss. Here, we propose an experimental framework for making this problem tractable, starting from the premise that any perceptual standard of gloss discrimination must account for how distal scene variables influence the statistics of proximal image data. With this goal in mind, we rendered a large set of images in which shape, illumination, viewpoint, and surface roughness were varied. For each combination of viewing conditions, a fixed difference in surface roughness was used to create a pair of images showing the same object (from the same viewpoint and under the same lighting) with high and low gloss. Human participants (N = 150) completed a paired comparisons task in which they were required to select image pairs with the largest apparent gloss difference. Importantly, rankings of the scenes derived from these judgments represent differences in perceived gloss independent of physical reflectance. We find that these rankings are remarkably consistent across participants, and are well-predicted by a straightforward Visual Differences Predictor (Daly, 1992; Mantiuk, Hammou, & Hanji, 2023). This allows us to estimate bounds on visual discriminability for a given surface across a wide range of viewing conditions.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 10","pages":"6"},"PeriodicalIF":2.3,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12352513/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144818094","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The role of scene context in object recognition: Timing and mechanisms.","authors":"Mingjie Gao, Jan Drewes, Weina Zhu","doi":"10.1167/jov.25.10.4","DOIUrl":"10.1167/jov.25.10.4","url":null,"abstract":"<p><p>One remarkable aspect of human perception is the ability to quickly extract the meaning of a target within a complex scene. Consistency between targets and their background scenes enhances visual understanding, yet the precise timing and underlying mechanisms of this effect remain unclear. To address this, two experiments were conducted exploring how object category, scene orientation, and scene consistency influence object recognition under conscious (Experiment 1) and unconscious (Experiment 2) conditions. Both experiments revealed an animate advantage, facilitation by scene consistency, and orientation-specific enhancements in object recognition. Performance was improved when both the object and the scene were upright, and furniture targets were more affected by scene consistency than animal targets. Specifically, under the conscious condition, the animate advantage was observed only for inconsistent scenes, whereas under the unconscious condition, the animate advantage was not influenced by scene consistency. Interestingly, in the unconscious state, the effects of target category and background orientation depended on scene consistency, with animals consistent with the background and furniture inconsistent with the background both influenced by orientation. These results suggest that scene context influences object recognition in the early stage of visual processing, and that furniture recognition is more sensitive to contextual regularities than animal recognition.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 10","pages":"4"},"PeriodicalIF":2.3,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12347156/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144795930","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Ensemble perception of faces with naturalistic occlusions.","authors":"Hayden Schill Hendley, Natalia K Pallis Hassani, Timothy F Brady","doi":"10.1167/jov.25.10.5","DOIUrl":"10.1167/jov.25.10.5","url":null,"abstract":"<p><p>The visual system takes advantage of redundancy in the world by extracting summary statistics, a phenomenon known as ensemble perception. Ensemble representations are formed for low-level features like orientation and size and high-level features such as facial identity and expression. Whereas recent research has shown that the visual system forms intact ensemble representations even when faces are partially occluded via solid bars, how ensemble perception is affected by the addition of naturalistic objects such as face masks or sunglasses is largely unknown. To investigate this, we conducted a series of experiments using continuous report tasks in which faces (either varying in identity or expression) were partially occluded with a surgical mask or sunglasses and participants had to report the average face using a face wheel. We found evidence that participants could still accurately extract the average even when a significant portion of it was occluded with either face masks or sunglasses. In a second experiment, however, we found that performance was worse when the face wheel varied from trial to trial. Thus, part of the preservation of performance under occlusion arises from the visual system learning the features of the particular face wheel being used. Overall, our results suggest that the visual system is able to establish robust ensemble representations for faces with naturalistic occlusions, but that robustness appears to be supported at least partially by learning information about the particular features that are informative for a given set of faces.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 10","pages":"5"},"PeriodicalIF":2.3,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12347214/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144800799","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Where do I go? Decoding temporal neural dynamics of scene processing and visuospatial memory interactions using convolutional neural networks.","authors":"Clément Naveilhan, Raphaël Zory, Stephen Ramanoël","doi":"10.1167/jov.25.10.15","DOIUrl":"https://doi.org/10.1167/jov.25.10.15","url":null,"abstract":"<p><p>Visual scene perception enables rapid interpretation of the surrounding environment by integrating multiple visual features related to task demands and context, which is essential for goal-directed behavior. In the present work, we investigated the temporal neural dynamics underlying the interaction between the processing of bottom-up visual features and top-down contextual knowledge during scene perception. We asked whether newly acquired spatial knowledge would immediately modulate the early neural responses involved in the extraction of navigational affordances available (i.e., the number of open doors). For this purpose, we analyzed electroencephalographic data from 30 participants performing interleaved blocks of a scene memory task and a visuospatial memory task in which we manipulated the number of navigational affordances available. We used convolutional neural networks coupled with gradient-weighted class activation mapping to assess the main electroencephalographic channels and time points contributing to the classification performances. The results indicated an early temporal window of integration in occipitoparietal activity (50-250 ms post stimulus) for several aspects of visual perception, including scene color and number of affordances, as well as for spatial memory content. Moreover, a convolutional neural network trained to detect affordances in the scene memory task failed to generalize to detect the same affordances after participants learned spatial information about goal position within the scene. Taken together, these results reveal an early common window of integration for scene and visuospatial memory information, with a specific and immediate top-down influence of newly acquired spatial knowledge on early neural correlates of scene perception.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 10","pages":"15"},"PeriodicalIF":2.3,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12400970/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144976623","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The asymmetry system properties of the visual system: Evidence from serial face processing.","authors":"Jun-Ming Yu, Haojiang Ying","doi":"10.1167/jov.25.10.12","DOIUrl":"https://doi.org/10.1167/jov.25.10.12","url":null,"abstract":"<p><p>Vision can be viewed as continuous information processing, yet its underlying system properties are not fully understood. Studies of visual serial dependence suggest that current perception is often biased by the preceding stimuli, raising the possibility of Markov-like processing-where only the previous state (not the ones before) affects the current one. In the current study, participants rated faces on two of three traits (attractiveness, trustworthiness, and dominance), presented in randomized sequences so each rating could be preceded by the same or a different trait. This design allowed us to examine how prior input (the face) and prior output (the perception) influence the current judgment. Using derivative-of-Gaussian, Markov chain, and linear mixed-effects modeling, we found that serial dependence was disrupted-and both the memoryless property and the Markov assumption were violated-when alternating between two traits for attractiveness and dominance, but not under other conditions. These findings suggest that different facets of (presumably) the same visual computation can exhibit asymmetrical system properties. More broadly, our work shows how serial dependence can serve as a powerful tool to probe the underlying rules by which the visual system integrates past and present information.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 10","pages":"12"},"PeriodicalIF":2.3,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12395802/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144976630","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Symmetrical and asymmetrical distortions in time and numerosity perception induced by chunked stimuli.","authors":"Zhichao Xue, Xiangyong Yuan, Yi Jiang","doi":"10.1167/jov.25.10.1","DOIUrl":"10.1167/jov.25.10.1","url":null,"abstract":"<p><p>The (co)representation of time and numerosity has long been a topic of enduring interest. While a theory of magnitude (ATOM) posits that these dimensions are governed by a shared representational system, empirical findings offer both supporting and conflicting evidence. Research challenging this view has highlighted that time and numerosity perception can be distorted in opposite directions by explicitly introducing emotional or cognitive interference. However, it remains unclear whether time and numerosity can spontaneously dissociate during stimulus processing. To this end, we tested the time and numerosity distortions caused by different kinds of chunked stimuli, including collinearity, illusory contours (ICs), and biological motion (BM). The results showed that collinearity caused the same amount of overestimation for both time and numerosity, whereas ICs caused only numerosity underestimation and BM caused only time overestimation. Notably, no consistent correlations emerged between the magnitudes of temporal and numerical distortion across the three stimulus types. These findings suggest that time and numerosity perception can be symmetrically or asymmetrically distorted depending on the nature of chunked stimuli, providing converging evidence for partially dissociable representations of time and numerosity. The close relationship observed between these two dimensions may instead reflect shared constraints within a broader framework of information processing.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 10","pages":"1"},"PeriodicalIF":2.3,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12327536/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144762160","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Color contrast adaptation and compensation in color deficiencies.","authors":"Fatemeh Basim, Erin Goddard, Yueran Yang, Michael A Webster","doi":"10.1167/jov.25.10.17","DOIUrl":"10.1167/jov.25.10.17","url":null,"abstract":"<p><p>Anomalous trichromacy (AT) results from a reduced spectral separation between the L and M cone photopigments. This leads to smaller differential responses in the L and M cones and thus lower sensitivity to the colors signaled by the LvsM difference. Despite this, for stimuli above threshold, many color-anomalous observers report color experiences that resemble those of color-normal individuals, suggesting some form of perceptual compensation for their sensitivity losses. The nature and sites of this compensation remain uncertain, and may reflect many levels, from early sensory mechanisms to later post-perceptual processes. To assess potential sensory-level compensation, we compared the aftereffects of adaptation to chromatic contrast in 15 color-normal and 15 color-anomalous observers (10 deutan, 5 protan). Without compensation, the same adapting contrast should produce weaker adaptation effects in anomalous observers, because the same physical adaptor is a lower multiple of their threshold sensitivity. We quantified this prediction in color-normals by rescaling the LvsM contrasts to simulate the sensitivity losses. Although protan observers showed mixed results, the deutan observers exhibited adaptation effects that exceeded the predictions based on their threshold sensitivities, indicating partial compensation for the reduced LvsM signals. These findings are consistent with a post-receptoral sensory gain in contrast processing that compensates for the weaker LvsM cone signals available to anomalous observers, and could reflect a general normalization of contrast coding to match the color gamut of the observer's environment.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 10","pages":"17"},"PeriodicalIF":2.3,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12400990/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144976263","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"No evidence for a cortical origin of pupil constriction responses to isoluminant stimuli.","authors":"Vasilii Marshev, Haley G Frey, Jan Brascamp","doi":"10.1167/jov.25.10.7","DOIUrl":"10.1167/jov.25.10.7","url":null,"abstract":"<p><p>The pupil constricts in response to visual stimuli that keep net luminance unchanged but that do introduce local luminance increments and decrements-a reaction here called \"isoluminant constriction.\" This response can form a pupillometric index of visual processing, but it is unclear what kind of processing it reflects; some authors have suggested that the constriction arises from subcortical, luminance-based neural signals, whereas others have argued for an origin at cortical, feature-based processing stages. We tested the involvement of cortical neural activity in isoluminant constrictions. To this end, we measured constrictions to stimuli presented after contrast adaptation, an adaptation procedure thought to lessen cortical stimulus responses. If cortical processing is involved in the isoluminant constriction, then such adaptation should lead to reduced isoluminant constriction amplitudes. We tested this prediction in the course of three experiments. We found no evidence for the prediction in any of the experiments, and did find Bayesian evidence against the prediction. These results suggest that, at least in the conditions of our experiments, isoluminant constrictions may not reflect visual cortical processing.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 10","pages":"7"},"PeriodicalIF":2.3,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12364009/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144823066","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Facial color matching in optical see-through augmented reality.","authors":"Yanmei He, Christopher A Thorstenson","doi":"10.1167/jov.25.10.16","DOIUrl":"10.1167/jov.25.10.16","url":null,"abstract":"<p><p>Augmented reality (AR) aims to combine elements of the surrounding environment with additional virtual content into a combined viewing scene. Displaying virtual human faces is a widespread practical application of AR technology, which can be challenging in optical see-through AR (OST-AR) because of limitations in its color reproduction. Specifically, OST-AR's additive optical blending introduces transparency and color-bleeding, which is exacerbated especially for faces having darker skin tones, and for brighter and more chromatic ambient environments. Given the increasing prevalence of social AR applications, it is essential to better understand how facial color reproduction is impacted by skin tone and ambient lighting in OST-AR. In this study, a psychophysical experiment was conducted to investigate how participants adjusted colorimetric dimensions of OST-AR-displayed faces to match the color of the same faces viewed on a conventional emissive display. These adjustments were made for faces having six different skin tones, while under different simulated ambient luminance (\"low\" vs. \"high\") and chromaticity (warm, neutral, cool). Additionally, participants rated their adjustments for overall appearance match and preference. The results indicate that the magnitude and specific dimensions of colorimetric adjustments needed to make matches varied across skin tones and ambient conditions. The current work is expected to facilitate virtual human face reproduction in AR applications and to foster more equitable and immersive extended reality environments.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 10","pages":"16"},"PeriodicalIF":2.3,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12400977/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144976280","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Increases versus decreases: Asymmetric effects of contrast changes during binocular rivalry modulated by awareness of perceptual switch.","authors":"Changzhi Huang, Rong Jiang, Ming Meng","doi":"10.1167/jov.25.10.14","DOIUrl":"https://doi.org/10.1167/jov.25.10.14","url":null,"abstract":"<p><p>The human visual system prioritizes dynamic stimuli, which attract attention and more readily break suppression to reach perceptual awareness. Here, we investigated whether dynamic changes in contrast-either increasing or decreasing-are equally effective in facilitating the breakthrough of suppressed stimuli during binocular rivalry. In Experiment 1a, we found that contrast increases led to significantly faster breakthroughs into perceptual dominance compared with decreases. Notably, increases accelerated breakthrough relative to the unchanged baseline, whereas decreases delayed it. Experiments 1b and 1c replicated the results of Experiment 1a using, respectively, a briefer contrast change (10 ms instead of 100 ms) and partial breakthrough reports, confirming a robust asymmetry in the processing of suppressed stimuli between increases and decreases. In Experiment 2a, random dots moving in different random directions were presented dichoptically, making interocular conflict imperceptible and unreportable. We found that any change in intensity in such rivalry settings-regardless of increase or decrease-promoted perceptual dominance. By introducing motion stimuli into the Experiment 1 paradigm, Experiment 2b demonstrated that the divergence between Experiments 1 and 2 was not due to low-level stimulus differences. Taken together, our results reveal an asymmetric effect of contrast changes during binocular rivalry. This finding highlights the interplay between subliminal sensory processing of contrast changes and conscious awareness, shedding light on the development of theoretical models of binocular rivalry.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 10","pages":"14"},"PeriodicalIF":2.3,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12395786/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144976482","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}