Vision Research, Pub Date: 2024-12-20, DOI: 10.1016/j.visres.2024.108536
Zoë Little, Colin W G Clifford
{"title":"The effects of feedback and task accuracy in serial dependence to orientation.","authors":"Zoë Little, Colin W G Clifford","doi":"10.1016/j.visres.2024.108536","DOIUrl":"https://doi.org/10.1016/j.visres.2024.108536","url":null,"abstract":"<p><p>Assimilative serial dependence in perception occurs where responses about a stimulus (e.g., orientation) are biased towards previously seen perceptual information (e.g., the orientation of the stimulus shown on the previous trial). This bias may occur to perceptual information from the previous trial, or to the response or decision made on the previous trial. We asked whether providing response feedback could change the serial dependence effect on the following trial. Twenty-one participants completed a task in which they adjusted an on-screen pointer to reproduce the orientation of a briefly-presented Gabor stimulus. They received feedback about the accuracy of their response that either reflected their actual accuracy or was random. We found significant positive biases to the stimulus and response only when the participant had received positive (\"correct!\") feedback on that trial. When the inducer response had been incorrect, the effect was significant only to the response itself and not to the stimulus. Overall, we suggest that our participants demonstrated a bias towards the percept from the previous trial, which is better represented by the response than the stimulus for incorrect trials, and that this effect can be modulated post-perceptually by feedback.</p>","PeriodicalId":23670,"journal":{"name":"Vision Research","volume":"227 ","pages":"108536"},"PeriodicalIF":1.5,"publicationDate":"2024-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142872885","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Vision Research, Pub Date: 2024-12-06, DOI: 10.1016/j.visres.2024.108533
Suva Roy
{"title":"Emerging strategies targeting genes and cells in glaucoma.","authors":"Suva Roy","doi":"10.1016/j.visres.2024.108533","DOIUrl":"https://doi.org/10.1016/j.visres.2024.108533","url":null,"abstract":"<p><p>Glaucoma comprises a heterogeneous set of eye conditions that cause progressive vision loss. Glaucoma has a complex etiology, with different genetic and non-genetic risk factors that differ across populations. Although difficult to diagnose in early stages, compromised cellular signaling, dysregulation of genes, and homeostatic imbalance are common precursors to injury and subsequent death of retinal ganglion cells (RGCs). Lowering intraocular pressure (IOP) remains the primary approach for managing glaucoma but IOP alone does not explain all glaucoma risks. Orthogonal approaches such as large-scale genetic screening, combined with studies of animal models have been instrumental in identifying genes and molecular pathways involved in glaucoma pathogenesis. Cell type dependent vulnerability among RGCs can reveal genetic basis for specific visual deficits. A growing body of knowledge and availability of modern tools to perform targeted assessments of cellular health in different animal models facilitate development of effective and timely interventions for vision rescue. This review highlights recent findings on genes, molecules, and cell types in the context of glaucoma pathophysiology and treatment.</p>","PeriodicalId":23670,"journal":{"name":"Vision Research","volume":"227 ","pages":"108533"},"PeriodicalIF":1.5,"publicationDate":"2024-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142792317","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Vision Research, Pub Date: 2024-12-06, DOI: 10.1016/j.visres.2024.108525
Cameron Kyle-Davidson, Oscar Solis, Stephen Robinson, Ryan Tze Wang Tan, Karla K Evans
{"title":"Scene complexity and the detail trace of human long-term visual memory.","authors":"Cameron Kyle-Davidson, Oscar Solis, Stephen Robinson, Ryan Tze Wang Tan, Karla K Evans","doi":"10.1016/j.visres.2024.108525","DOIUrl":"https://doi.org/10.1016/j.visres.2024.108525","url":null,"abstract":"<p><p>Humans can remember a vast amount of scene images; an ability often attributed to encoding only low-fidelity gist traces of a scene. Instead, studies show a surprising amount of detail is retained for each scene image allowing them to be distinguished from highly similar in-category distractors. The gist trace for images can be relatively easily captured through both computational and behavioural techniques, but capturing detail is much harder. While detail can be broadly estimated at the categorical level (e.g. man-made scenes more complex than natural), there is a lack of both ground-truth detail data at the sample level and a way to operationalise it for measurement purposes. Here through three different studies, we investigate whether the perceptual complexity of scenes can serve as a suitable analogue for the detail present in a scene, and hence whether we can use complexity to determine the relationship between scene detail and visual long term memory for scenes. First we examine this relationship directly using the VISCHEMA datasets, to determine whether the perceived complexity of a scene interacts with memorability, finding a significant positive correlation between complexity and memory, in contrast to the hypothesised U-shaped relation often proposed in the literature. In the second study we model complexity via artificial means, and find that even predicted measures of complexity still correlate with the overall ground-truth memorability of a scene, indicating that complexity and memorability cannot be easily disentangled. Finally, we investigate how cognitive load impacts the influence of scene complexity on image memorability. Together, findings indicate complexity and memorability do vary non-linearly, but generally it is limited to the extremes of the image complexity ranges. The effect of complexity on memory closely mirrors previous findings that detail enhances memory, and suggests that complexity is a suitable analogue for detail in visual long-term scene memory.</p>","PeriodicalId":23670,"journal":{"name":"Vision Research","volume":"227 ","pages":"108525"},"PeriodicalIF":1.5,"publicationDate":"2024-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142792343","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Vision Research, Pub Date: 2024-11-29, DOI: 10.1016/j.visres.2024.108524
Zhenzhen Li, Yu Liu, Yuechen Zhu, Ming Ronnier Luo
{"title":"Visual comfort models based on coloured text and neutral background combinations","authors":"Zhenzhen Li, Yu Liu, Yuechen Zhu, Ming Ronnier Luo","doi":"10.1016/j.visres.2024.108524","DOIUrl":"10.1016/j.visres.2024.108524","url":null,"abstract":"<div><div>Reading on mobile phones can cause visual discomfort, negatively affecting visual health. Most studies have focused on neutral text-background combinations, with limited validation for coloured text-background combinations. This study investigates the impact of coloured text on neutral backgrounds and the colour difference between text and background on visual comfort during digital reading. A psychophysical experiment was conducted, where 230 images of coloured text on neutral backgrounds were evaluated by 20 participants using a 6-point scale for visual comfort. Results showed that reading coloured text on a black background generally provided higher comfort compared to a white background. Additionally, visual comfort decreased as the text colour approached that of the background. The effect of text hue on comfort was not significant. Furthermore, several visual comfort models for mobile displays were developed and compared. The VC<sub>1-LAB</sub> model is based on Bern’s attributes, while the VC<sub>2-LAB</sub> model focuses on the lightness of text and background. The VC<sub>3-LAB</sub> model includes both lightness and chroma attributes. Comparisons revealed that VC<sub>3-LAB</sub> outperformed the others in predicting visual comfort, highlighting the importance of lightness and chroma in improving predictive accuracy. Therefore, the VC<sub>3-LAB</sub> model is useful for evaluating the visual comfort of coloured text on neutral backgrounds.</div></div>","PeriodicalId":23670,"journal":{"name":"Vision Research","volume":"227 ","pages":"Article 108524"},"PeriodicalIF":1.5,"publicationDate":"2024-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142747178","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Vision Research, Pub Date: 2024-11-29, DOI: 10.1016/j.visres.2024.108500
Junhao Liang, Li Zhaoping
{"title":"Trans-saccadic integration for object recognition peters out with pre-saccadic object eccentricity as target-directed saccades become more saliency-driven","authors":"Junhao Liang, Li Zhaoping","doi":"10.1016/j.visres.2024.108500","DOIUrl":"10.1016/j.visres.2024.108500","url":null,"abstract":"<div><div>Bringing objects from peripheral locations to fovea via saccades facilitates their recognition. Human observers integrate pre- and post-saccadic information for recognition. This integration has only been investigated using instructed saccades to prescribed locations. Typically, the target has a fixed pre-saccadic location in an uncluttered scene and is viewed by a pre-determined post-saccadic duration. Consequently, whether trans-saccadic integration is limited or absent when the pre-saccadic target eccentricity is too large in cluttered scenes in unknown. Our study revealed this limit during visual exploration, when observers decided themselves when and to where to make their saccades. We asked thirty observers (400 trials each) to find and report as quickly as possible a target amongst 404 non-targets in an image spanning <span><math><mrow><mn>57</mn><mo>.</mo><mn>3</mn><mo>°</mo><mo>×</mo><mn>33</mn><mo>.</mo><mn>8</mn><mo>°</mo></mrow></math></span> in visual angle. We measured the target’s pre-saccadic eccentricity <span><math><mi>e</mi></math></span>, the duration <span><math><msub><mrow><mi>T</mi></mrow><mrow><mi>p</mi><mi>r</mi><mi>e</mi></mrow></msub></math></span> of the fixation before the saccade, and the post-saccadic foveal viewing duration <span><math><msub><mrow><mi>T</mi></mrow><mrow><mi>p</mi><mi>o</mi><mi>s</mi><mi>t</mi></mrow></msub></math></span>. This <span><math><msub><mrow><mi>T</mi></mrow><mrow><mi>p</mi><mi>o</mi><mi>s</mi><mi>t</mi></mrow></msub></math></span> increased with <span><math><mi>e</mi></math></span> before starting to saturate around eccentricity <span><math><mrow><msub><mrow><mi>e</mi></mrow><mrow><mi>p</mi></mrow></msub><mo>=</mo><mn>10</mn><mo>°</mo><mo>−</mo><mn>20</mn><mo>°</mo></mrow></math></span>. Meanwhile, <span><math><msub><mrow><mi>T</mi></mrow><mrow><mi>p</mi><mi>r</mi><mi>e</mi></mrow></msub></math></span> increased much more slowly with <span><math><mi>e</mi></math></span> and started decreasing before <span><math><msub><mrow><mi>e</mi></mrow><mrow><mi>p</mi></mrow></msub></math></span>. These observations imply the following at sufficiently large pre-saccadic eccentricities: the trans-saccadic integration ceases, target recognition relies exclusively on post-saccadic foveal vision, decision to saccade to the target relies exclusively on target saliency rather than identification. These implications should be applicable to general behavior, although <span><math><msub><mrow><mi>e</mi></mrow><mrow><mi>p</mi></mrow></msub></math></span> should depend on object and scene properties. 
They are consistent with the Central-peripheral Dichotomy that central and peripheral vision are specialized for seeing and looking, respectively.</div></div>","PeriodicalId":23670,"journal":{"name":"Vision Research","volume":"226 ","pages":"Article 108500"},"PeriodicalIF":1.5,"publicationDate":"2024-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142746138","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
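The headline result is that T_post rises with pre-saccadic eccentricity e and then saturates around e_p = 10°-20°. One simple way to estimate such a saturation point from (e, T_post) pairs is to fit a saturating exponential; the sketch below does this on toy data. The functional form and parameters are our illustrative choice, not the authors' analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def saturating(e, t0, gain, e_scale):
    """T_post model: baseline plus an exponential rise with scale e_scale."""
    return t0 + gain * (1.0 - np.exp(-e / e_scale))

# Toy data mimicking the reported trend: T_post (ms) rises with eccentricity
# (deg) and plateaus; noise stands in for trial-to-trial variability.
rng = np.random.default_rng(2)
ecc = rng.uniform(1.0, 28.0, 300)
t_post = saturating(ecc, 250.0, 150.0, 12.0) + rng.normal(0.0, 20.0, 300)

(t0, gain, e_scale), _ = curve_fit(saturating, ecc, t_post, p0=[200.0, 100.0, 10.0])
print(f"saturation scale: {e_scale:.1f} deg")  # recovers ~12 deg on this toy data
```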
Vision Research, Pub Date: 2024-11-26, DOI: 10.1016/j.visres.2024.108489
Li Zhaoping
{"title":"Applying the efficient coding principle to understand encoding of multisensory and multimodality sensory signals","authors":"Li Zhaoping","doi":"10.1016/j.visres.2024.108489","DOIUrl":"10.1016/j.visres.2024.108489","url":null,"abstract":"<div><div>Sensory neurons often encode multisensory or multimodal signals. For example, many medial superior temporal (MST) neurons are tuned to heading direction of self-motion based on visual (optic flow) signals and vestibular signals. Middle temporal (MT) cortical neurons are tuned to object depth from signals of two visual modalities: motion parallax and binocular disparity. A MST neuron’s preferred heading directions from different senses can be congruent (matched) or opposite from each other. Similarly, the preferred depths of a MT neuron from the two modalities are congruent in some neurons and opposite in other neurons. While the congruent tuning appears natural for cue integration, the functions of the opposite tuning have been puzzling. This paper explains these tunings from the efficient coding principle that sensory encoding extracts as much sensory information as possible while minimizing neural cost. It extends the previous applications of this principle to understand neural receptive fields in retina and the primary visual cortex, particularly multimodal encoding of cone signals or binocular signals. Congruent and opposite sensory signals that excite the congruent and opposite neurons, respectively, are the decorrelated sensory components that provide a general purpose, efficient, representation of sensory inputs before task specific object segmentation and recognition. It can be extended to encoding signals from more than two sensory sources, e.g., from three cone types. This framework also predicts a wider tuning width for the opposite than congruent neurons, neurons that are neither congruent nor opposite, and how neural receptive fields adapt to statistical changes of sensory environments.</div></div>","PeriodicalId":23670,"journal":{"name":"Vision Research","volume":"226 ","pages":"Article 108489"},"PeriodicalIF":1.5,"publicationDate":"2024-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142699188","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Vision Research, Pub Date: 2024-11-26, DOI: 10.1016/j.visres.2024.108523
Cai Wingfield, Andrew Soltan, Ian Nimmo-Smith, William D. Marslen-Wilson, Andrew Thwaites
{"title":"Tracking cortical entrainment to stages of optic-flow processing","authors":"Cai Wingfield , Andrew Soltan , Ian Nimmo-Smith , William D. Marslen-Wilson , Andrew Thwaites","doi":"10.1016/j.visres.2024.108523","DOIUrl":"10.1016/j.visres.2024.108523","url":null,"abstract":"<div><div>In human visual processing, information from the visual field passes through numerous transformations before perceptual attributes such as motion are derived. Determining the sequence of transforms involved in the perception of visual motion has been an active field since the 1940s. One plausible family of models are the spatiotemporal energy models, based on computations of motion energy computed from the spatiotemporal features the visual field. One of the most venerated is that of <span><span>Heeger (1988)</span></span>, which hypotheses that motion is estimated by matching the predicted spatiotemporal energy in frequency space. In this study, we investigate the plausibility of Heeger’s model by testing for evidence of cortical entrainment to its components. Entrainment of cortical activity to these components was estimated using measurements of electro- and magnetoencephalographic (EMEG) activity, recorded while healthy subjects watched videos of dots moving left and right across their visual field. We find entrainment to several components of Heeger’s model bilaterally in occipital lobe regions, including representations of motion energy at a latency of 80 ms, overall velocity at 95 ms, and acceleration at 130 ms. We find little evidence of entrainment to displacement. We contrast Heeger’s biologically inspired model with alternative baseline models, finding that Heeger’s model provides a closer fit to the observed data. These results help shed light on the processes through which perception of motion arises in the visual processing stream.</div></div>","PeriodicalId":23670,"journal":{"name":"Vision Research","volume":"226 ","pages":"Article 108523"},"PeriodicalIF":1.5,"publicationDate":"2024-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142699187","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Vision Research, Pub Date: 2024-11-23, DOI: 10.1016/j.visres.2024.108522
Maral Namdari, Fiona S. McDonnell
{"title":"Extracellular vesicles as emerging players in glaucoma: Mechanisms, biomarkers, and therapeutic targets","authors":"Maral Namdari , Fiona S. McDonnell","doi":"10.1016/j.visres.2024.108522","DOIUrl":"10.1016/j.visres.2024.108522","url":null,"abstract":"<div><div>In recent years, extracellular vesicles (EVs) have attracted significant scientific interest due to their widespread distribution, their potential as disease biomarkers, and their promising applications in therapy. Encapsulated by lipid bilayers these nanovesicles include small extracellular vesicles (sEV) (30–150 nm), microvesicles (100–1000 nm), and apoptotic bodies (100–5000 nm) and are essential for cellular communication, immune responses, biomolecular transport, and physiological regulation. As they reflect the condition and functionality of their originating cells, EVs play critical roles in numerous physiological processes and diseases. Therefore, EVs offer valuable opportunities for uncovering disease mechanisms, enhancing drug delivery systems, and identifying novel biomarkers. In the context of glaucoma, a leading cause of irreversible blindness, the specific roles of EVs are still largely unexplored.</div><div>This review examines the emerging role of EVs in the pathogenesis of glaucoma, with a focus on their potential as diagnostic biomarkers and therapeutic agents. Through a thorough analysis of current literature, we summarize key advancements in EV research and identify areas where further investigation is needed to fully understand their function in glaucoma.</div></div>","PeriodicalId":23670,"journal":{"name":"Vision Research","volume":"226 ","pages":"Article 108522"},"PeriodicalIF":1.5,"publicationDate":"2024-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142699189","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visual discomfort and chromatic flickers","authors":"Sanae Yoshimoto , Hinako Iizuka , Tatsuto Takeuchi","doi":"10.1016/j.visres.2024.108520","DOIUrl":"10.1016/j.visres.2024.108520","url":null,"abstract":"<div><div>Flickering patterns that shift in chromaticity can be uncomfortable and may trigger epileptic seizures, though the underlying factors are not fully understood. In the spatial domain, chromatic contrast in images is a potential predictor of visual discomfort, with higher contrast generally leading to increased discomfort. This study investigated whether chromatic contrast between two flickering colors in a uniform field influences discomfort. Participants rated their subjective discomfort for various flickering color combinations defined by the CIE <em>L*a*b*</em> uniform color space. Overall, discomfort increased with both chromatic and brightness contrasts. Additionally, flickers containing highly saturated red generally caused greater discomfort compared to those without red, an effect not observed with low saturation. Our findings suggest that visual discomfort induced by time-varying chromatic patterns is partly influenced by chromatic contrast over time. Furthermore, unlike the spatial domain, discomfort in the temporal domain may be specifically associated with the hue of red.</div></div>","PeriodicalId":23670,"journal":{"name":"Vision Research","volume":"226 ","pages":"Article 108520"},"PeriodicalIF":1.5,"publicationDate":"2024-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142688434","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Vision Research, Pub Date: 2024-11-19, DOI: 10.1016/j.visres.2024.108521
Hao Lou, Karin S. Pilz, Monicque M. Lorist
{"title":"Effects of cue location and object orientation on object-based attention","authors":"Hao Lou , Karin S. Pilz , Monicque M. Lorist","doi":"10.1016/j.visres.2024.108521","DOIUrl":"10.1016/j.visres.2024.108521","url":null,"abstract":"<div><div>Spatial cues have previously been found to facilitate information processing not only at cued locations but also within cued objects, so-called object-based attention. We used different variants of the classic two-rectangle paradigm to investigate the interaction of cue location and object orientation on object-based attentional effects. First, we re-analyzed data from a prior study using the classical two-rectangle paradigm. We expected faster attentional shifts along the horizontal compared to the vertical meridian. Results confirmed that cue location and rectangle orientation interactively influence object-based attention, with horizontal objects combined with upper left visual field cues eliciting faster responses than other conditions. In Experiment 2, we removed object contours to examine the benefits of shifting attention based purely on cue location. The results showed that these differences remained, indicating that attentional shifts are not solely guided by object contours. In Experiment 3, we added a third possible target location to the original two-rectangle experiment to examine whether attentional shifts followed a predictable pattern across the stimulus display. Despite faster responses to cued targets, no consistent and organized visual search pattern was observed when participants searched for targets at invalidly cued locations. Our findings suggest that object-based effects are influenced by both cue location and the orientation of attentional shifts. Shifts from left to right in the upper visual field consistently demonstrated significant benefits, whereas the benefits of vertical shifts were less consistent across experiments.</div></div>","PeriodicalId":23670,"journal":{"name":"Vision Research","volume":"226 ","pages":"Article 108521"},"PeriodicalIF":1.5,"publicationDate":"2024-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142682869","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}