Vision Research | Pub Date: 2024-06-08 | DOI: 10.1016/j.visres.2024.108438
Crystal Guo, Akihito Maruya, Qasim Zaidi
Title: Complexity of mental geometry for 3D pose perception

Abstract: Biological visual systems rely on pose estimation of 3D objects to navigate and interact with their environment, but the neural mechanisms and computations for inferring 3D poses from 2D retinal images are only partially understood, especially where stereo information is missing. We previously presented evidence that humans infer the poses of 3D objects lying centered on the ground by using the geometrical back-transform from retinal images to viewer-centered world coordinates. This model explained the almost veridical estimation of poses in real scenes and the illusory rotation of poses in obliquely viewed pictures, which includes the “pointing out of the picture” phenomenon. Here we test this model for more varied configurations and find that it needs to be augmented. Five observers estimated poses of sloped, elevated, or off-center 3D sticks in each of 16 different poses displayed on a monitor in frontal and oblique views. Pose estimates in scenes and pictures showed remarkable accuracy and agreement between observers, but with a systematic fronto-parallel bias for oblique poses, similar to that in the ground condition. The retinal projection of the pose of an object sloped with respect to the ground depends on the slope, and we show that observers’ estimates can be explained by the back-transform derived for close to the correct slope. The back-transform explanation also applies to obliquely viewed pictures, to off-center objects, and to elevated objects, making it more likely that observers use internalized perspective geometry to make 3D pose inferences while actively incorporating inferences about other aspects of object placement.
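The geometrical back-transform referred to above can be illustrated with a minimal sketch. Assuming orthographic projection of a rod lying flat on the ground viewed from elevation φ (a simplification: the paper's model uses full perspective geometry, and the function names here are our own), the depth axis is foreshortened by sin φ, and inverting that foreshortening recovers the physical pose from the image angle:

```python
import math

def image_angle(omega_deg, elevation_deg):
    """Forward projection: image orientation of a rod lying on the ground
    at pose angle omega (measured from the frontoparallel axis), under
    orthographic viewing from the given elevation. The depth axis is
    foreshortened by sin(elevation)."""
    om = math.radians(omega_deg)
    el = math.radians(elevation_deg)
    return math.degrees(math.atan2(math.sin(om) * math.sin(el), math.cos(om)))

def back_transform(image_deg, elevation_deg):
    """Invert the projection: recover the physical pose angle from the
    retinal (image) angle and the assumed viewing elevation."""
    im = math.radians(image_deg)
    el = math.radians(elevation_deg)
    return math.degrees(math.atan2(math.sin(im), math.cos(im) * math.sin(el)))

# A 45-degree pose seen from 30 degrees elevation projects to a shallower
# image angle (about 26.6 degrees); the back-transform recovers the pose.
proj = image_angle(45.0, 30.0)
recovered = back_transform(proj, 30.0)  # 45.0
```

The same machinery shows why the slope matters: a back-transform computed with the wrong assumed elevation (or, analogously, the wrong object slope) returns a biased pose, which is the kind of systematic deviation the observers' estimates are compared against.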
Vision Research | Pub Date: 2024-06-01 | DOI: 10.1016/j.visres.2024.108437
Vivian Wu , Malgorzata Swider , Alexander Sumaroka , Valerie L. Dufour , Joseph E. Vance , Tomas S. Aleman , Gustavo D. Aguirre , William A. Beltran , Artur V. Cideciyan
Title: Corrigendum to “Retinal response to light exposure in BEST1-mutant dogs evaluated with ultra-high resolution OCT” [Vis. Res. 218 (2024) 108379]

Open Access PDF: https://www.sciencedirect.com/science/article/pii/S0042698924000816/pdfft?md5=088d8df3a632c974706ff295876259d6&pid=1-s2.0-S0042698924000816-main.pdf
Vision Research | Pub Date: 2024-05-30 | DOI: 10.1016/j.visres.2024.108436
June Cutler , Alexandre Bodet , Josée Rivest , Patrick Cavanagh
Title: The word superiority effect overcomes crowding

Abstract: Crowding and the word superiority effect are two perceptual phenomena that influence reading. The identification of the inner letters of a word can be hindered by crowding from adjacent letters, but it can be facilitated by the word context itself (the word superiority effect). In the present study, four-letter strings (words and non-words) with different inter-letter spacings (ranging from a spacing optimal for producing crowding to one too large to produce it) were presented briefly in the periphery, and participants were asked to identify the third letter of the string. Each word had a partner word that was identical except for its third letter (e.g., COLD, CORD), so that guessing could be ruled out as the source of any improved performance for words. Unsurprisingly, letter identification accuracy was better for words than for non-words. For non-words, accuracy was lowest at the closer spacings, confirming crowding. For words, however, accuracy remained high at all inter-letter spacings, showing that crowding did not prevent identification of the inner letters. This result supports models of “holistic” word recognition in which partial cues can lead to recognition without first identifying individual letters. Once the word is recognized, its inner letters can be recovered, despite the feature loss produced by crowding.
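The partner-word control described above can be sketched in a small simulation. A hypothetical observer who recognizes the word frame but cannot see the critical letter must choose between the two valid completions, so its accuracy on the third letter is pinned near chance; the word pairs and names below are illustrative, not the study's stimulus set:

```python
import random

# Each pair differs only in the third letter, so knowing the word frame
# alone leaves two equally valid completions.
PAIRS = [("COLD", "CORD"), ("SAND", "SAID"), ("FILE", "FINE")]

def frame_guesser(pair, rng):
    """Observer who recognizes the frame but guesses the critical letter:
    picks one of the two completions at random."""
    return rng.choice(pair)[2]

rng = random.Random(0)
trials = 10_000
correct = 0
for _ in range(trials):
    pair = rng.choice(PAIRS)
    target = rng.choice(pair)
    correct += frame_guesser(pair, rng) == target[2]

rate = correct / trials  # hovers near 0.5, the chance baseline
```

Any accuracy reliably above this 50% baseline therefore reflects perception of the letter itself rather than guessing from word context.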
Vision Research | Pub Date: 2024-05-27 | DOI: 10.1016/j.visres.2024.108434
Igor Iezhitsa, Renu Agarwal, Puneet Agarwal
Title: Unveiling enigmatic essence of Sphingolipids: A promising avenue for glaucoma treatment

Abstract: Treatment of glaucoma, the leading cause of irreversible blindness, remains challenging. The apoptotic loss of retinal ganglion cells (RGCs) is the pathological hallmark of glaucoma. Current treatments often remain suboptimal, as they aim to halt RGC loss only secondarily, through reduction of intraocular pressure. Pathophysiological targets for direct neuroprotective approaches are therefore highly relevant. Sphingolipids have emerged as significant target molecules: they are not only structural components of various cell constituents but also signaling molecules that regulate molecular pathways involved in cell survival and death. Investigations have shown that a critical balance among various sphingolipid species, particularly ceramide and sphingosine-1-phosphate, plays a role in deciding the fate of the cell. In this review, we briefly discuss the metabolic interconversion of sphingolipid species to give insight into the “sphingolipid rheostat”, the dynamic balance among metabolites. We then highlight the role of sphingolipids in the key pathophysiological mechanisms that lead to glaucomatous loss of RGCs. Lastly, we summarize the potential drug candidates that have been investigated for their neuroprotective effects in glaucoma via the sphingolipid axis.
Vision Research | Pub Date: 2024-05-20 | DOI: 10.1016/j.visres.2024.108433
Maria Dvoeglazova , Tadamasa Sawada
Title: A role of rectangularity in perceiving a 3D shape of an object

Abstract: Rectangularity and perpendicularity of contours are important properties of 3D shape for the visual system, which can use them as a priori constraints for perceiving shape veridically. The present article provides a comprehensive review of prior studies of the perception of rectangularity and perpendicularity and discusses their effects on 3D shape perception from both theoretical and empirical perspectives. It has been shown that the visual system is biased to perceive a rectangular 3D shape from a 2D image. We thought that this bias might be attributable to the likelihood of a rectangular interpretation, but this hypothesis is not supported by the results of our psychophysical experiment. Note that the perception of a rectangular shape cannot be explained solely on the basis of geometry: a rectangular shape is perceived even from an image that is inconsistent with a rectangular interpretation. To address this issue, we developed a computational model that can recover a rectangular shape from an image of a parallelepiped. The model allows the recovered shape to be slightly inconsistent with the image so that the recovered shape can satisfy the a priori constraints of maximum compactness and minimal surface area. This model captures some of the phenomena associated with the perception of rectangular shape that were reported in prior studies. This finding suggests that rectangularity contributes to shape perception in combination with some additional constraints.
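The maximum-compactness constraint named above can be sketched numerically. Assuming the common scale-invariant definition of 3D compactness as V²/S³ (an assumption: the abstract does not spell out the formula, and the candidates and scoring below are illustrative, not the authors' recovery algorithm), a model choosing among candidate box interpretations of an ambiguous image prefers the most cube-like one:

```python
def compactness(a, b, c):
    """V^2 / S^3 for a box with edge lengths a, b, c. The ratio is
    scale-invariant and, among boxes, maximal for a cube."""
    volume = a * b * c
    surface = 2 * (a * b + b * c + a * c)
    return volume ** 2 / surface ** 3

# Candidate 3D interpretations of the same ambiguous image,
# from elongated to cubic:
candidates = [(4.0, 1.0, 1.0), (2.0, 2.0, 1.0), (1.0, 1.0, 1.0)]
best = max(candidates, key=lambda dims: compactness(*dims))
# best == (1.0, 1.0, 1.0): the cube wins.
```

In the paper's model this preference is traded off against fidelity to the image, which is why the recovered shape is allowed to be slightly inconsistent with the parallelepiped's projection.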
Vision Research | Pub Date: 2024-05-13 | DOI: 10.1016/j.visres.2024.108424
Christof Elias Topfstedt , Luca Wollenberg , Thomas Schenk
Title: Training enables substantial decoupling of visual attention and saccade preparation

Abstract: Visual attention is typically shifted toward the targets of upcoming saccadic eye movements. This observation is commonly interpreted in terms of an obligatory coupling between attentional selection and oculomotor programming. Here, we investigated whether this coupling is facilitated by a habitual expectation of spatial congruence between visual and motor targets. To this end, we conducted a dual-task experiment (a concurrent saccade task and visual discrimination task) in which male and female participants were trained to anticipate either spatial congruence or incongruence between a saccade target and an attention probe stimulus. To assess training-induced effects of expectation on premotor attention allocation, participants subsequently completed a test phase in which the attention probe position was randomized. Results revealed that discrimination performance was systematically biased toward the expected attention probe position, irrespective of whether this position matched the saccade target. Overall, our findings demonstrate that visual attention can be substantially decoupled from ongoing oculomotor programming and suggest an important role of habitual expectations in the attention-action coupling.

Open Access PDF: https://www.sciencedirect.com/science/article/pii/S0042698924000683/pdfft?md5=ef8d8e46b93a589da04a1a017591cff1&pid=1-s2.0-S0042698924000683-main.pdf
Vision Research | Pub Date: 2024-05-10 | DOI: 10.1016/j.visres.2024.108423
Charlotte Falkenberg, Franz Faul
Title: Transparent layer constancy improves with increased naturalness of the scene

Abstract: The extent to which hue, saturation, and transmittance of thin light-transmitting layers are perceived as constant when the illumination changes (transparent layer constancy, TLC) has previously been investigated with simple stimuli in asymmetric matching tasks. In this task, a target filter is presented under one illumination and a second filter is matched under another. Although two different illuminations are applied in the stimulus generation, there is no guarantee that the visual system interprets the stimulus accordingly. In previous work, we found a higher degree of TLC when the two illuminations were presented alternately rather than simultaneously, which could be explained, for example, by an increased plausibility of an illumination change. Here, we test whether TLC can also be increased under simultaneous presentation when additional cues make it more likely that the filter belongs to a particular illumination context. To this end, we presented filters in differently lit areas of complex, naturalistically rendered 3D scenes containing several types of cues to the prevailing illumination, such as scene geometry, object shading, and cast shadows. We found higher degrees of TLC in such complex scenes than in colorimetrically similar simple 2D color mosaics, consistent with the results of similar studies on color constancy. To test which of the available illumination cues are actually used, the different types of cues were successively removed from the naturalistically rendered complex scene; eight levels of scene complexity were examined in total. As expected, TLC decreased as more cues were removed. Object shading and illumination gradients due to cast shadows both had a positive effect on TLC. A second filter had a small positive effect on TLC when added to strongly reduced scenes, but not in the complex scenes, which already provide many cues about the illumination context of the filter.

Open Access PDF: https://www.sciencedirect.com/science/article/pii/S0042698924000671/pdfft?md5=e8db80ddadb6f0b3b906a6eb1b041552&pid=1-s2.0-S0042698924000671-main.pdf
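Why layer constancy requires discounting the illuminant can be sketched with a deliberately simplified single-pass filter model (this is not the authors' model, which is more complete; the wavelength bands and numbers below are illustrative): the signal from a surface seen through a thin filter is illuminant × reflectance × transmittance per band, so the same filter produces different retinal signals under different lights.

```python
def through_filter(illuminant, reflectance, transmittance):
    """Per-band signal from a surface seen through a thin filter,
    in a simplified multiplicative model."""
    return [e * r * t for e, r, t in zip(illuminant, reflectance, transmittance)]

surface = [0.6, 0.6, 0.6]    # neutral gray background (three bands)
filt    = [0.9, 0.5, 0.5]    # reddish filter transmittance
daylight = [1.0, 1.0, 1.0]
tungsten = [1.0, 0.8, 0.5]   # warmer illuminant

under_day  = through_filter(daylight, surface, filt)
under_tung = through_filter(tungsten, surface, filt)  # different signal, same filter

# Comparing a filtered region against an adjacent plain region under the
# same light cancels the illuminant and recovers the transmittance:
plain_tung = through_filter(tungsten, surface, [1.0, 1.0, 1.0])
recovered  = [f / p for f, p in zip(under_tung, plain_tung)]  # equals filt
```

Such filtered-versus-plain ratios are exactly the kind of information that the scene-based illumination cues in the experiment (geometry, shading, cast shadows) make available, which may be why richer scenes support higher TLC.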
Vision Research | Pub Date: 2024-05-07 | DOI: 10.1016/j.visres.2024.108422
Joshua A. Solomon , Fintan Nagle , Christopher W. Tyler
Title: Spatial summation for motion detection

Abstract: We used the psychophysical summation paradigm to reveal some spatial characteristics of the mechanism responsible for detecting a motion-defined visual target in central vision. There has been much previous work on spatial summation for motion detection and direction discrimination, but none has assessed it in terms of the velocity threshold or used velocity noise to measure the efficiency of the velocity-processing mechanism. Motion-defined targets were centered within square fields of randomly selected gray levels. The motion was produced within the disk-shaped target region by shifting the pixels rightwards for 0.2 s. The uniform target motion was perturbed by Gaussian motion noise in horizontal strips of 16 pixels. Independent variables were the field size, the diameter of the disk target, and the variance of an independent perturbation added to the (signed) velocity of each 16-pixel strip; the dependent variable was the threshold velocity for target detection. Velocity thresholds formed swoosh-shaped (descending, then ascending) functions of target diameter, with minimum values when targets subtended approximately 2 degrees of visual angle. The data were fit with a continuum of models, extending from the theoretically ideal observer through various inefficient and noisy refinements thereof. In particular, we introduce the concept of sparse sampling to account for the relative inefficiency of the velocity thresholds. The best fits were obtained from a model observer whose responses were determined by comparing the velocity profile of each stimulus with a limited set of sparsely sampled “DoG” templates, each of which is the product of a random binary array and the difference between two 2-D Gaussian density functions.

Open Access PDF: https://www.sciencedirect.com/science/article/pii/S004269892400066X/pdfft?md5=4d383e7288048973388d21b75e0398c0&pid=1-s2.0-S004269892400066X-main.pdf
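The template construction described above (the product of a random binary array and the difference between two 2-D Gaussian density functions) can be sketched directly; the size, sigmas, and sampling density below are illustrative choices, not the paper's fitted values:

```python
import numpy as np

def sparse_dog_template(size=64, sigma_center=4.0, sigma_surround=8.0,
                        sample_prob=0.2, rng=None):
    """Sparsely sampled DoG template: a random binary mask multiplied
    element-wise by a difference of two 2-D Gaussian densities."""
    rng = np.random.default_rng(rng)
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx ** 2 + yy ** 2

    def gauss2d(sigma):
        # Normalized 2-D Gaussian density on the pixel grid.
        return np.exp(-r2 / (2 * sigma ** 2)) / (2 * np.pi * sigma ** 2)

    dog = gauss2d(sigma_center) - gauss2d(sigma_surround)
    mask = rng.random((size, size)) < sample_prob  # random binary array
    return mask * dog

template = sparse_dog_template(rng=0)
# The model observer's decision variable would then be a dot product
# between such templates and the stimulus velocity profile; with
# sample_prob=0.2, only about 20% of locations contribute, which is the
# sparse-sampling inefficiency invoked in the abstract.
```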