Transsaccadic perception of changes in object regularity.
Nino Sharvashidze, Matteo Valsecchi, Alexander C Schütz
Journal of Vision, 24(13):3, December 2, 2024. doi: 10.1167/jov.24.13.3. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11627247/pdf/

Abstract: The visual system compensates for differences between peripheral and foveal vision using different mechanisms. Although peripheral vision is characterized by higher spatial uncertainty and lower resolution than foveal vision, observers reported objects to be less distorted and less blurry in the periphery than in the fovea in a visual matching task during fixation (Valsecchi et al., 2018). Here, we asked whether a similar overcompensation could be found across saccadic eye movements and whether it would bias the detection of transsaccadic changes in object regularity. The blur and distortion levels of simple geometric shapes were manipulated with the Eidolons algorithm (Koenderink et al., 2017). In an appearance discrimination task, participants had to judge the appearance of blur (Experiment 1) and distortion (Experiment 2) separately before and after a saccade. Objects appeared less blurry before a saccade (in the periphery) than after a saccade (in the fovea). No differences were found in the appearance of distortion. In a change discrimination task, participants had to judge whether blur (Experiment 1) and distortion (Experiment 2) increased or decreased during a saccade. Overall, they showed a tendency to report an increase in both blur and distortion across saccades. The precision of the responses was improved by a 200-ms postsaccadic blank. Results from the change discrimination task of both experiments suggest that a transsaccadic decrease in regularity is more visible than an increase in regularity. In line with the previous study that reported a peripheral overcompensation in the visual matching task, we found a similar mechanism, exhibiting a phenomenological sharpening of blurry edges before a saccade. These results generalize peripheral-foveal differences observed during fixation to the dynamic, transsaccadic conditions tested here, where they contribute to biases in transsaccadic change detection.

Color crowding considered as adaptive spatial integration.
Guido Marco Cicchini, Giovanni D'Errico, David Charles Burr
Journal of Vision, 24(13):9, December 2, 2024. doi: 10.1167/jov.24.13.9. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11636666/pdf/

Abstract: Crowding is the inability to recognize an object in clutter, classically considered a fundamental low-level bottleneck to object recognition. Recently, however, it has been suggested that crowding, like predictive phenomena such as serial dependence, may result from optimizing strategies that exploit redundancies in natural scenes. This notion leads to several testable predictions, such as crowding being greater for nonsalient targets and, counterintuitively, that flanker interference should be associated with higher precision in judgments, leading to a lower overall error rate. Here we measured color discrimination for targets flanked by stimuli of variable color. The results verified both predictions, showing that although crowding can affect object recognition, it may be better understood not as a processing bottleneck but rather as a consequence of mechanisms evolved to efficiently exploit the spatial redundancies of the natural world. Analyses of the reaction times of judgments show that the integration occurs at sensory rather than decisional levels.

{"title":"Influence of Fresnel effects on the glossiness and perceived depth of depth-scaled glossy objects.","authors":"Franz Faul, Christian Robbes","doi":"10.1167/jov.24.13.1","DOIUrl":"10.1167/jov.24.13.1","url":null,"abstract":"<p><p>Fresnel effects, that is, shape-dependent changes in the strength of specular reflection from glossy objects, can lead to large changes in reflection strength when objects are scaled along the viewing axis. In an experiment, we scaled sphere-like bumpy objects with fixed material parameters in the depth direction and then measured with and without Fresnel effects how this influences the gloss impression, gloss constancy, and perceived depth. The results show that Fresnel effects in this case lead to a strong increase in gloss with depth, indicating lower gloss constancy than without them, but that they improve depth perception. In addition, we used inverse rendering to investigate the extent to which Fresnel effects in a rendered image limit the possible object shapes in the underlying scene. We found that, for a static monocular view of an unknown object, Fresnel effects by themselves provide only a weak constraint on the overall shape of the object.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"24 13","pages":"1"},"PeriodicalIF":2.0,"publicationDate":"2024-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11614003/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142774185","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Attention moderates the motion silencing effect for dynamic orientation changes in a discrimination task.
Tabea-Maria Haase, Anina N Rich, Iain D Gilchrist, Christopher Kent
Journal of Vision, 24(13):13, December 2, 2024. doi: 10.1167/jov.24.13.13. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11684489/pdf/

Abstract: Being able to detect changes in our visual environment reliably and quickly is important for many daily tasks. The motion silencing effect describes a decrease in the ability to detect feature changes for faster moving objects compared with stationary or slowly moving objects. One theory is that spatiotemporal receptive field properties in early vision might account for the silencing effect, suggesting that its origins lie in low-level visual processing. Here, we explore whether spatial attention can modulate motion silencing of orientation changes to gain a greater understanding of the underlying mechanisms. In Experiment 1, we confirm that the motion silencing effect occurs for the discrimination of orientation changes. In Experiment 2, we use a Posner-style cueing paradigm to investigate whether manipulating covert attention modulates motion silencing for orientation. The results show a clear spatial cueing effect: Participants were able to discriminate orientation changes successfully at higher velocities when the cue was valid compared to neutral cues, and performance was worst when the cue was invalid. These results show that motion silencing can be modulated by directing spatial attention toward a moving target and provide support for a role for higher-level processes, such as attention, in motion silencing of orientation changes.

{"title":"Corrections to: Mapping spatial frequency preferences across human primary visual cortex.","authors":"","doi":"10.1167/jov.24.13.8","DOIUrl":"10.1167/jov.24.13.8","url":null,"abstract":"","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"24 13","pages":"8"},"PeriodicalIF":2.0,"publicationDate":"2024-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11629902/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142803024","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Relating visual and pictorial space: Integration of binocular disparity and motion parallax.","authors":"Xiaoye Michael Wang, Nikolaus F Troje","doi":"10.1167/jov.24.13.7","DOIUrl":"10.1167/jov.24.13.7","url":null,"abstract":"<p><p>Traditionally, perceptual spaces are defined by the medium through which the visual environment is conveyed (e.g., in a physical environment, through a picture, or on a screen). This approach overlooks the distinct contributions of different types of visual information, such as binocular disparity and motion parallax, that transform different visual environments to yield different perceptual spaces. The current study proposes a new approach to describe different perceptual spaces based on different visual information. A geometrical model was developed to delineate the transformations imposed by binocular disparity and motion parallax, including (a) a relief depth scaling along the observer's line of sight and (b) pictorial distortions that rotate the entire perceptual space, as well as the invariant properties after these transformations, including distance, three-dimensional shape, and allocentric direction. The model was fitted to the behavioral results from two experiments, wherein the participants rotated a human figure to point at different targets in virtual reality. The pointer was displayed on a virtual frame that could differentially manipulate the availability of binocular disparity and motion parallax. The model fitted the behavioral results well, and model comparisons validated the relief scaling in the form of depth expansion and the pictorial distortions in the form of an isotropic rotation. Fitted parameters showed that binocular disparity renders distance invariant but also introduces relief depth expansion to three-dimensional objects, whereas motion parallax keeps allocentric direction invariant. We discuss the implications of the mediating effects of binocular disparity and motion parallax when connecting different perceptual spaces.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"24 13","pages":"7"},"PeriodicalIF":2.0,"publicationDate":"2024-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11640909/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142803030","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Increased light scatter in simulated cataracts degrades speed perception.
Samantha L Strong, Ayah I Al-Rababah, Leon N Davies
Journal of Vision, 24(13):12, December 2, 2024. doi: 10.1167/jov.24.13.12. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11668353/pdf/

Abstract: Changes in contrast and blur affect speed perception, raising the question of whether natural changes in the eye (e.g., cataract) that induce light scatter may affect motion perception. This study investigated whether light scatter, similar to that present in a cataractous eye, could have deleterious effects on speed perception. Experiment 1: Participants (n = 14) completed a speed discrimination task using random dot kinematograms. The just-noticeable difference was calculated for two reference speeds (slow; fast) and two directions (translational; radial). Light scatter was induced with filters at four levels: baseline, mild, moderate, and severe. Repeated measures analyses of variance (ANOVAs) found significant main effects of scatter on speed discrimination for radial motion (slow: F(3, 39) = 7.33, p < 0.01; fast: F(3, 39) = 4.80, p < 0.01). Discrimination was attenuated under moderate (slow: p = 0.021) and severe (slow: p = 0.024; fast: p = 0.017) scatter. No effect was found for translational motion. Experiment 2: Participants (n = 14) completed a time-to-contact experiment at three speeds (slow, moderate, fast). Light scatter was induced as in Experiment 1. The results show that increasing scatter led to perceptual slowing. Repeated measures ANOVAs revealed that the moderate (F(3, 39) = 3.57, p = 0.023) and fast (F(1.42, 18.48) = 5.63, p = 0.020) speeds were affected by the increasing light scatter. Overall, speed discrimination is attenuated by increasing light scatter, which appears to be driven by a perceptual slowing of the stimuli.

{"title":"Low sensitivity for orientation in texture similarity ratings.","authors":"Hans-Christoph Nothdurft","doi":"10.1167/jov.24.13.14","DOIUrl":"10.1167/jov.24.13.14","url":null,"abstract":"<p><p>Research on visual texture perception in the last decades was often devoted to segmentation and region segregation. In this report, I address a different aspect, that of texture identification and similarity ratings between texture fields with different texture properties superimposed. In a series of experiments, I noticed that certain feature dimensions were considered as more important for similarity evaluation than others. A particularly low ranking is given to orientation. This paper reports data from two test series: a comparison of color and line orientation and a comparison of two purely spatial properties, texture granularity (spatial frequency) and texture orientation. In both experiments, observers tended to ignore orientation when grouping texture patches for similarity and instead looked for similarities in the second dimension, color or spatial frequency, even across different orientations.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"24 13","pages":"14"},"PeriodicalIF":2.0,"publicationDate":"2024-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11668350/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142865879","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Sensory feedback modulates Weber's law of both perception and action.","authors":"Ailin Deng, Evan Cesanek, Fulvio Domini","doi":"10.1167/jov.24.13.10","DOIUrl":"10.1167/jov.24.13.10","url":null,"abstract":"<p><p>Weber's law states that estimation noise is proportional to stimulus intensity. Although this holds in perception, it appears absent in visually guided actions where response variability does not scale with object size. This discrepancy is often attributed to dissociated visual processing for perception and action. Here, we explore an alternative explanation: It is the influence of sensory feedback on motor output that causes this apparent violation. Our research investigated response variability across repeated grasps relative to object size and found that the variability pattern is contingent on sensory feedback. Pantomime grasps with neither online visual feedback nor final haptic feedback showed variability that scaled with object size, as expected by Weber's law. However, this scaling diminished when sensory feedback was available, either directly present in the movement (Experiment 1) or in adjacent movements in the same block (Experiment 2). Moreover, a simple visual cue indicating performance error similarly reduced the scaling of variability with object size in manual size estimates, the perceptual counterpart of grasping responses (Experiment 3). These results support the hypothesis that sensory feedback modulates motor responses and their associated variability across both action and perception tasks. Post hoc analyses indicated that the reduced scaling of response variability with object size could be due to changes in motor mapping, the process mapping visual size estimates to motor outputs. Consequently, the absence of Weber's law in action responses might not indicate distinct visual processing but rather adaptive changes in motor strategies based on sensory feedback.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"24 13","pages":"10"},"PeriodicalIF":2.0,"publicationDate":"2024-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11654771/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142838998","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Impaired processing of spatiotemporal visual attention engagement deficits in Chinese children with developmental dyslexia.","authors":"Baojun Duan, Xiaoling Tang, Datao Wang, Yanjun Zhang, Guihua An, Huan Wang, Aibao Zhou","doi":"10.1167/jov.24.13.2","DOIUrl":"10.1167/jov.24.13.2","url":null,"abstract":"<p><p>Emerging evidence suggests that visuospatial attention plays an important role in reading among Chinese children with dyslexia. Additionally, numerous studies have shown that Chinese children with dyslexia have deficits in their visuospatial attention orienting; however, the visual attention engagement deficits in Chinese children with dyslexia remain unclear. Therefore, we used a visual attention masking (AM) paradigm to characterize the spatiotemporal distribution of visual attention engagement in Chinese children with dyslexia. AM refers to impaired identification of the first (S1) of two rapidly sequentially presented mask objects. In the present study, S1 was always centrally displayed, whereas the spatial position of S2 (left, middle, or right) and the S1-S2 interval were manipulated. The results revealed a specific temporal deficit of visual attentional masking in Chinese children with dyslexia. The mean accuracy rate for developmental dyslexia (DD) in the middle spatial position was significantly lower than that in the left spatial position at a stimulus onset asynchrony (SOA) of 140 ms, compared with chronological age (CA). Moreover, we further observed spatial deficits of visual attentional masking in the three different spatial positions. Specifically, in the middle spatial position, the AM effect of DD was significantly larger for the 140-ms SOA than for the 250-ms and 600-ms SOA compared with CA. Our results suggest that Chinese children with dyslexia are significantly impaired in visual attentional engagement and that spatiotemporal visual attentional engagement may play a special role in Chinese reading.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"24 13","pages":"2"},"PeriodicalIF":2.0,"publicationDate":"2024-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11620018/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142774183","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}