Vision Research | Pub Date: 2025-07-26 | DOI: 10.1016/j.visres.2025.108664
Joycianne Rodrigues Parente, Eliza Maria da Costa Brito Lacerda, Dora Fix Ventura, Paulo Roney Kilpp Goulart, Natália B. Dutra, Givago Silva Souza, Letícia Miquilini

Correlation between parameters estimated by the colour assessment and diagnosis and the Cambridge colour test in color discrimination evaluation

The Colour Assessment and Diagnosis (CAD) and the Cambridge Colour Test (CCT) are computerized psychophysical tests widely used in the diagnosis of color vision deficiencies because of their high specificity and sensitivity. However, the tests differ substantially in visual task, stimulus configuration, vectors, luminance, background composition, and presentation time. This study compared the evaluation parameters estimated by the two tests in trichromatic and dichromatic individuals. A total of sixty-six participants were evaluated: 40 trichromats and 38 dichromats (16 protans and 22 deutans; mean age: 26.3 ± 8.9 years). Color discrimination thresholds were fitted to elliptical functions, and parameters such as ellipse area, rotation angle, and the sizes of the protan, deutan, and tritan vectors were analyzed. Results showed equivalence between the tests for the deutan and tritan vector areas and sizes in the trichromat subgroup, the tritan vector area and size in the protan subgroup, and the protan and tritan vector sizes in the deutan subgroup. Differences in the central coordinates of the CAD and CCT tests and in the spatial arrangement of vectors in the CIE 1976 color space (specific to the CCT) may have influenced the results. Nonetheless, the two tests agreed in their measures of ellipse area, rotation angle, and protan and tritan vector sizes. These findings suggest that, despite methodological differences, the CAD and CCT produce largely comparable results and can be considered complementary tools for assessing color discrimination in clinical and research settings.
{"title":"Feature synergy enhances detection but not recognition of shape from texture cues","authors":"Cordula Hunt-Radej, Anna-Lena Schubert, Günter Meinhardt","doi":"10.1016/j.visres.2025.108660","DOIUrl":"10.1016/j.visres.2025.108660","url":null,"abstract":"<div><div>Texture regions that differ from their surroundings in more than one local feature are more easily detected. Recent findings show that a low-level summary statistic, net contrast energy, predicts this double-cue advantage, suggesting early-stage integration during image analysis. We investigated whether this advantage also applies to more complex, texture-defined shape discrimination beyond figure-ground segregation. Using both a figure detection task and a more demanding shape identification task, we calibrated <span><math><msup><mrow><mi>d</mi></mrow><mrow><mo>′</mo></mrow></msup></math></span> sensitivity to fixed baseline levels with single-cue targets defined by orientation or spatial frequency contrast. We then measured performance for double-cue targets at these baselines. Contrary to earlier results reported for simpler shape discriminations, we found a reduced double-cue advantage in the shape identification task. Specifically, double-cue sensitivity was notably lower than the algebraic sum of the single-cue sensitivities, a level achieved consistently in the detection task. Control tests with high feature contrast showed perfect detection performance for both single and combined cues. However, shape identification saturated at levels between <span><math><mrow><mn>83</mn><mtext>%–</mtext><mn>90</mn><mtext>%</mtext></mrow></math></span> accuracy, while gray-shaded figures yielded perfect performance, suggesting that unique shape representations could not be built from single or combined texture cues. These findings suggest that texture cue summation enhances texture segregation and segmentation but does not improve higher-level recognition of 2D texture shapes.</div></div>","PeriodicalId":23670,"journal":{"name":"Vision Research","volume":"235 ","pages":"Article 108660"},"PeriodicalIF":1.5,"publicationDate":"2025-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144654309","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Vision Research | Pub Date: 2025-07-17 | DOI: 10.1016/j.visres.2025.108662
Ailsa Humphries, Kyle R. Cave, Zhe Chen

Attentional settings based on previous experience affect bias in visual comparisons

Previous experiments on visual comparison have shown that the spatial congruency bias (SCB), a bias to categorise two targets as "same" if they occupy the same location in successive displays, and the overall bias (OB), the average bias across all trials, both vary with visual similarity, and that the OB also varies with the type of task being performed. In four experiments, we explored whether these results are best explained by a visual similarity account, an attentional zoom account, or a combination of the two. Using a shape comparison task, we manipulated the visual similarity and predictability of the target displays by varying the local position of the target letter, either between blocks (predictable; Experiments 1a and 2a) or within a block (unpredictable; Experiments 1b and 2b). We also varied the distractor letters so that they were the same across the to-be-compared displays on most trials (Experiments 1a and 1b) or different on every trial (Experiments 2a and 2b). Under conditions of low interference, the predictability of visual information had no effect on the OB or the SCB, but this may be because attentional demands are low in these conditions. As predicted by the attentional zoom account, the SCB was influenced by predictability when peripheral interference was high. These results suggest that both similarity and attention play a role in visual comparisons.
Vision Research | Pub Date: 2025-07-12 | DOI: 10.1016/j.visres.2025.108657
Leyla Nur Turhal Çalışkan, Samet Çıklaçandır, Ömer Pars Kocaoğlu

Accommodative forces in aging human eye

Changes in the mechanical properties of the human crystalline lens over the years result in a loss of accommodation amplitude and eventually in presbyopia. While some material property changes of the aging human crystalline lens have been mapped, challenges remain in their in vivo characterization. Conflicting findings in the literature highlight the complexity of accurately defining lens biomechanics. Young's modulus, anterior and posterior lens curvatures, lens thickness, and refractive index are examples of these well-studied properties. However, knowledge of the forces applied to the crystalline lens to generate corresponding accommodative amplitudes has been limited to a few age groups, and a full mapping of these accommodative forces over decades of the aging human eye remains incomplete. We used mechanical properties available in the literature to develop a mechanical model of the crystalline lens for age groups between 10 and 70 years. Finite element modeling and optical power calculations obtained from lens deformation during simulated accommodation were then used to create a map of accommodative forces over the human lifespan. We found an S-curve-shaped decline in the total equatorial force required on the capsule to achieve reported accommodative amplitudes. This decline does not indicate increased lens compliance but reflects the possibility of age-related weakening of the applied force. The total force ranged from 0.5 N at age 10 to near zero at age 70, with a steep drop between ages 30 and 50.
Vision Research | Pub Date: 2025-07-09 | DOI: 10.1016/j.visres.2025.108659
Maureen D. Plaumann, Wei Wei, Teng Leng Ooi

Determining fixation accuracy with optical coherence tomography and its implication on visual acuity in amblyopia

Inaccurate fixation is a hallmark of strabismus and amblyopia. Recently, the positional error of fixation in amblyopic children was assessed with Optical Coherence Tomography (OCT). This study extends the use of OCT to examine both the positional error and the stability of fixation in an adult population and investigates how lifelong impairment of fixation can affect visual acuity in amblyopia. Twenty macular cube scans per eye were acquired with the Cirrus HD-OCT in 30 amblyopes and 30 controls with normal binocular vision. The foveal location was identified with the instrument's software as line scan coordinates to determine the distance between the fovea and the center of the scan. The average positional error and the stability of fixation were calculated from the foveal location measurements. Crowded monocular distance visual acuity (VA) was obtained from each eye. Amblyopic eyes demonstrated greater positional error and fixation instability than fellow and control eyes. Simple linear regressions revealed significant relationships between positional error and VA and between fixation stability and VA. In a multiple regression, however, positional error alone was a significant predictor of VA. Fixation accuracy analysis from OCT imaging provides a quantitative assessment of fixation behavior, allowing more comprehensive clinical management of amblyopia and prediction of visual acuity.
Vision Research | Pub Date: 2025-07-02 | DOI: 10.1016/j.visres.2025.108653
Paul B. Hibbard, Jordi M. Asher, Rebecca L. Hornsey

The contributions of pictorial, motion, and binocular cues to the perception of depth and distance

Multiple visual cues are available for the estimation of distance. According to the modified weak fusion model, the information from these cues is combined through weighted averaging, with the weights determined by the relative reliability of each cue. Empirical tests of this model tend to isolate a small number of cues so that their reliabilities can be manipulated. Weights measured in this way are specific to the testing environment and do not allow us to quantify the contributions of individual cues in natural viewing. To address this, we used estimates from the literature of sensitivity to a wide range of distance cues to predict the contributions of pictorial, binocular, and motion cues to relative distance. The cues assessed included convergence, accommodation, height in the field, texture density, relative size, binocular disparity, and motion (assuming a walking observer). We used the modified weak fusion model to estimate the contribution of binocular, motion, and pictorial cues for distances between 2 and 100 m. These calculations provide estimates of the expected contributions of individual depth cues in everyday viewing conditions. In most cases, our results show a clear benefit for the weighted averaging of cues in the natural environment compared with the use of the most reliable cue alone.
Vision Research | Pub Date: 2025-06-27 | DOI: 10.1016/j.visres.2025.108658
F. Monier, L. Hertel, S. Droit-Volet, P. Chausse

Ocular vergences measurement in virtual reality: A pilot study

In this study, we investigated the value of using virtual reality to evaluate ocular vergence performance. We used a virtual reality device with an integrated eye-tracking system to create virtual environments that simulated far and near vision conditions, and we assessed ocular movements. We compared the maximum angular deviation compensated by the visual system (the vergence scores) of participants in the virtual environments with the vergence scores obtained with a prism in a real environment, i.e., with the technique usually used for clinical assessments. We also compared a simple virtual environment with a complex virtual environment created from landscapes. The vergence scores obtained for divergence and convergence with the virtual reality device were very similar to those obtained using prisms, suggesting that the virtual environments effectively simulated three-dimensional vision conditions. Our results also support the idea that modulating the angular deviation of the projected image in the virtual reality headset is a satisfactory way of inducing ocular vergences. Fusion amplitudes were larger in the virtual conditions, suggesting that the controlled virtual environments provided better conditions for measuring vergence movements. Furthermore, the virtual reality device yielded a larger amplitude of fusion in participants with high convergence abilities by preventing the underestimation of divergence abilities in these participants. This last result suggests that this type of virtual reality setup could be helpful in the future for remediating vergence-related disorders.
Vision Research | Pub Date: 2025-06-26 | DOI: 10.1016/j.visres.2025.108651
Maria Kon, Gregory Francis

Grouping strategies in induced perceptual grouping

Induced grouping refers to the influence of a perceived group of elements on the grouping of another set of elements, an influence that cannot be explained by other grouping principles. Vickery (2008) first highlighted this phenomenon and, despite convincing demonstrations of the principle, that work seems to be the only direct study of it. Here we report two successful large-sample replications of one of Vickery's experiments. We also explain Vickery's results with a cortical model of visual grouping and selection. We extended a previous model so that it performs a feature-based search of an image for a target. We show that induced grouping effects result from a connection strategy that links together target pairs in a visual search task, combined with a selection strategy that tends to place a selection signal at locations close to the target pair's features. These strategies interact because the connection strategy that links target pairs also sometimes links inducing elements, thereby influencing the location of the selection signal. The model extension plays a key role in explaining this phenomenon and enables the model to simulate other tasks, such as visual search, in which the observer uses a dynamic, feature-guided selection process.
{"title":"Including the nonlinear response of neurons to improve the prediction of visual acuity across levels of contrast, luminance, and blur","authors":"Charles-Edouard Leroux, Christophe Fontvieille, Fabrice Bardin","doi":"10.1016/j.visres.2025.108652","DOIUrl":"10.1016/j.visres.2025.108652","url":null,"abstract":"<div><div>We present a theoretical model that predicts visual acuity changes over extended ranges of stimulus contrast, luminance, and optical blur. We highlight the significance of neuronal response nonlinearity to optical contrast in achieving model agreement with experimental data. The model operates by computing, for each experimental condition, a parameter termed <em>data separability</em> within the framework of statistical decision theory. We assume a theoretical model observer that utilizes sharp image templates for optotype identification, consistent with our previous work for small (<span><math><mrow><mo><</mo><mn>0</mn><mo>.</mo><mn>5</mn></mrow></math></span> D) optical aberrations (Leroux et al., 2024). The model incorporates the nonlinear response of visual neurons to contrast stimuli in the simulation of visual images. We digitalized measurements from Johnson and Casson (1995), who studied the combined effects of stimulus contrast (6 to 97%), luminance (0.075 to 75 cd/m<span><math><msup><mrow></mrow><mrow><mn>2</mn></mrow></msup></math></span>), and blur (0 to 8 D positive lens), and compared our model’s predictions to their data. The model achieved an overall root-mean-square residual of 0.048 logMAR for measurements spanning 1.73 logMAR. Accounting for nonlinearity proved critical in predicting acuity across these extended ranges of experimental conditions. This approach may also be necessary for modeling acuity under non-standard experimental conditions and/or for subjects with pathologies.</div></div>","PeriodicalId":23670,"journal":{"name":"Vision Research","volume":"234 ","pages":"Article 108652"},"PeriodicalIF":1.5,"publicationDate":"2025-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144351533","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Vision Research | Pub Date: 2025-06-24 | DOI: 10.1016/j.visres.2025.108654
Anna L. Vlasits

Receptive fields of retinal neurons: New themes and variations

Receptive fields have long been central to understanding signal processing in the visual system. Initially defined as the region of visual space that influences a given neuron's activity, receptive fields are now recognized to encompass additional dimensions such as time and color. This multidimensional representation provides a window into how visual neurons filter incoming stimuli. In the retina, receptive fields emerge from neuronal processing by a multi-layered circuit. Recent research on temporal, chromatic, and adaptive processing in the retina has revealed more complex receptive fields than were initially recognized. This review emphasizes new research on receptive fields in the retina and highlights approaches that promise to expand our understanding of retinal receptive fields.