{"title":"Salience maps for judgments of frontal plane distance, centroids, numerosity, and letter identity inferred from substance-invariant processing.","authors":"Lingyu Gan, George Sperling","doi":"10.1167/jov.25.1.8","DOIUrl":"10.1167/jov.25.1.8","url":null,"abstract":"<p><p>A salience map is a topographic map that has inputs at each x,y location from many different feature maps and summarizes the combined salience of all those inputs as a real number, salience, which is represented in the map. Of the more than 1 million Google references to salience maps, nearly all use the map for computing the relative priority of visual image components for subsequent processing. We observe that salience processing is an instance of substance-invariant processing, analogous to household measuring cups, weight scales, and measuring tapes, all of which make single-number substance-invariant measurements. Like these devices, the brain also collects material for substance-invariant measurements but by a different mechanism: salience maps that collect visual substances for subsequent measurement. Each salience map can be used by many different measurements. The instruction to attend is implemented by increasing the salience of the to-be-attended items so they can be collected in a salience map and then further processed. Here we show that, beyond processing priority, the following measurement tasks are substance invariant and therefore use salience maps: computing distance in the frontal plane, computing centroids (center of a cluster of items), computing the numerosity of a collection of items, and identifying alphabetic letters. We painstakingly demonstrate that defining items exclusively by color or texture not only is sufficient for these tasks, but that light-dark luminance information significantly improves performance only for letter recognition. Obviously, visual features are represented in the brain but their salience alone is sufficient for these four judgments.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 1","pages":"8"},"PeriodicalIF":2.0,"publicationDate":"2025-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11724370/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142957988","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Monocular eye-cueing shifts eye balance in amblyopia.","authors":"Sandy P Wong, Robert F Hess, Kathy T Mullen","doi":"10.1167/jov.25.1.6","DOIUrl":"10.1167/jov.25.1.6","url":null,"abstract":"<p><p>Here, we investigate the shift in eye balance in response to monocular cueing in adults with amblyopia. In normally sighted adults, biasing attention toward one eye, by presenting a monocular visual stimulus to it, can shift eye balance toward the stimulated eye, as measured by binocular rivalry. We investigated whether we can modulate eye balance by directing monocular stimulation/attention in adults with clinical binocular deficits associated with amblyopia and larger eye imbalances. In a dual-task paradigm, eight participants continuously reported ongoing rivalry percepts and simultaneously performed a task related to the cueing stimulus. Time series of eye balance dynamics, aligned to cue onset, are averaged across trials and participants. In different time series, we tested the effect of monocular cueing on the amblyopic and fellow eyes (compared to a binocular control condition) and the effect of an active versus passive task. Overall, we found a significant shift in eye balance toward the monocularly cued eye, when both the fellow eye or the amblyopic eye were cued, F(2, 14) = 27.649, p < 0.01, ω2 = 0.590. This was independent of whether, during the binocular rivalry, the cue stimulus was presented to the perceiving eye or the non-perceiving eye. Performing an active task tended to produce a larger eye balance change, but this effect did not reach significance. Our results suggest that the eye imbalance in adults with binocular deficits, such as amblyopia, can be transiently reduced by monocularly directed stimulation, at least through activation of bottom-up attentional processes.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 1","pages":"6"},"PeriodicalIF":2.0,"publicationDate":"2025-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11724371/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142957986","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Visual information shows dominance in determining the magnitude of intentional binding for audiovisual outcomes.","authors":"De-Wei Dai, Po-Jang Brown Hsieh","doi":"10.1167/jov.25.1.7","DOIUrl":"10.1167/jov.25.1.7","url":null,"abstract":"<p><p>Intentional binding (IB) refers to the compression of subjective timing between a voluntary action and its outcome. In this study, we investigate the IB of a multimodal (audiovisual) outcome. We used a modified Libet clock while depicting a dynamic physical event (collision). Experiment 1 examined whether IB for the unimodal (auditory) event could be generalized to the multimodal (audiovisual) event, compared their magnitudes, and assessed whether the level of integration between modalities could affect IB. Planned contrasts (n = 42) showed significant IB effects for all types of events; the magnitude of IB was significantly weaker in both audiovisual integrated and audiovisual irrelevant conditions compared with auditory, with no difference between the integrated and irrelevant conditions. Experiment 2 separated the components of the audiovisual event to test the appropriate model describing the magnitude of IB in multimodal contexts. Planned contrasts (n = 42) showed the magnitude of IB was significantly weaker in both the audiovisual and visual conditions compared with the auditory condition, with no difference between the audiovisual and visual conditions. Additional Bayesian analysis provided moderate evidence supporting the equivalence between the two conditions. In conclusion, this study demonstrated that the IB phenomenon can be generalized to multimodal (audiovisual) sensory outcomes, and visual information shows dominance in determining the magnitude of IB for audiovisual events.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 1","pages":"7"},"PeriodicalIF":2.0,"publicationDate":"2025-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11721482/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142957992","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Improving the reliability and accuracy of population receptive field measures using a logarithmically warped stimulus.","authors":"Kelly Chang, Ione Fine, Geoffrey M Boynton","doi":"10.1167/jov.25.1.5","DOIUrl":"10.1167/jov.25.1.5","url":null,"abstract":"<p><p>The population receptive field (pRF) method, which measures the region in visual space that elicits a blood-oxygen-level-dependent (BOLD) signal in a voxel in retinotopic cortex, is a powerful tool for investigating the functional organization of human visual cortex with fMRI (Dumoulin & Wandell, 2008). However, recent work has shown that pRF estimates for early retinotopic visual areas can be biased and unreliable, especially for voxels representing the fovea. Here, we show that a log-bar stimulus that is logarithmically warped along the eccentricity dimension produces more reliable estimates of pRF size and location than the traditional moving bar stimulus. The log-bar stimulus was better able to identify pRFs near the foveal representation, and pRFs were smaller in size, consistent with simulation estimates of receptive field sizes in the fovea.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 1","pages":"5"},"PeriodicalIF":2.0,"publicationDate":"2025-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142923817","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Oculomotor challenges in macular degeneration impact motion extrapolation.","authors":"Jason F Rubinstein, Noelia Gabriela Alcalde, Adrien Chopin, Preeti Verghese","doi":"10.1167/jov.25.1.17","DOIUrl":"https://doi.org/10.1167/jov.25.1.17","url":null,"abstract":"<p><p>Macular degeneration (MD), which affects the central visual field including the fovea, has a profound impact on acuity and oculomotor control. We used a motion extrapolation task to investigate the contribution of various factors that potentially impact motion estimation, including the transient disappearance of the target into the scotoma, increased position uncertainty associated with eccentric target positions, and increased oculomotor noise due to the use of a non-foveal locus for fixation and for eye movements. Observers performed a perceptual baseball task where they judged whether the target would intersect or miss a rectangular region (the plate). The target was extinguished before reaching the plate and participants were instructed either to fixate a marker or smoothly track the target before making the judgment. We tested nine eyes of six participants with MD and four control observers with simulated scotomata that matched those of individual participants with MD. Both groups used their habitual oculomotor locus-eccentric preferred retinal locus (PRL) for MD and fovea for controls. In the fixation condition, motion extrapolation was less accurate for controls with simulated scotomata than without, indicating that occlusion by the scotoma impacted the task. In both the fixation and pursuit conditions, MD participants with eccentric preferred retinal loci typically had worse motion extrapolation than controls with a matched artificial scotoma and foveal preferred retinal loci. Statistical analysis revealed occlusion and target eccentricity significantly impacted motion extrapolation in the pursuit condition, indicating that these factors make it challenging to estimate and track the path of a moving target in MD.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 1","pages":"17"},"PeriodicalIF":2.0,"publicationDate":"2025-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143061059","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Estimating the contribution of early and late noise in vision from psychophysical data.","authors":"Jesús Malo, José Juan Esteve-Taboada, Guillermo Aguilar, Marianne Maertens, Felix A Wichmann","doi":"10.1167/jov.25.1.12","DOIUrl":"10.1167/jov.25.1.12","url":null,"abstract":"<p><p>Human performance in psychophysical detection and discrimination tasks is limited by inner noise. It is unclear to what extent this inner noise arises from early noise (e.g., in the photoreceptors) or from late noise (at or immediately prior to the decision stage, presumably in cortex). Very likely, the behaviorally limiting inner noise is a nontrivial combination of both early and late noise. Here we propose a method to quantify the contributions of early and late noise purely from psychophysical data. Our approach generalizes classical results for linear systems by combining the theory of noise propagation through a nonlinear network with expressions to obtain a perceptual metric through a nonlinear network. We show that from threshold-only data, the relative contributions of early and late noise can only be disentangled when the experiments include substantial external noise. When full psychometric functions are available, early and late noise sources can be quantified even in the absence of external noise. Our psychophysical estimate of the magnitude of early noise-assuming a standard cascade of linear and nonlinear model stages-is substantially lower than the noise in cone photocurrents computed via an accurate model of retinal physiology, the ISETBio. This is consistent with the idea that one of the fundamental tasks of early vision is to reduce the comparatively large retinal noise.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 1","pages":"12"},"PeriodicalIF":2.0,"publicationDate":"2025-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11758886/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142973014","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A comparative analysis of perceptual noise in lateral and depth motion: Evidence from eye tracking.","authors":"Joan López-Moliner","doi":"10.1167/jov.25.1.15","DOIUrl":"10.1167/jov.25.1.15","url":null,"abstract":"<p><p>The characterization of how precisely we perceive visual speed has traditionally relied on psychophysical judgments in discrimination tasks. Such tasks are often considered laborious and susceptible to biases, particularly without the involvement of highly trained participants. Additionally, thresholds for motion-in-depth perception are frequently reported as higher compared to lateral motion, a discrepancy that contrasts with everyday visuomotor tasks. In this research, we rely on a smooth pursuit model, based on a Kalman filter, to quantify speed observational uncertainties. This model allows us to distinguish between additive and multiplicative noise across three conditions of motion dynamics within a virtual reality setting: random walk, linear motion, and nonlinear motion, incorporating both lateral and depth motion components. We aim to assess tracking performance and perceptual uncertainties for lateral versus motion-in-depth. In alignment with prior research, our results indicate diminished performance for depth motion in the random walk condition, characterized by unpredictable positioning. However, when velocity information is available and facilitates predictions of future positions, perceptual uncertainties become more consistent between lateral and in-depth motion. This consistency is particularly noticeable within ranges where retinal speeds overlap between these two dimensions. Significantly, additive noise emerges as the primary source of uncertainty, largely exceeding multiplicative noise. This predominance of additive noise is consistent with computational accounts of visual motion. Our study challenges earlier beliefs of marked differences in processing lateral versus in-depth motions, suggesting similar levels of perceptual uncertainty and underscoring the significant role of additive noise.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 1","pages":"15"},"PeriodicalIF":2.0,"publicationDate":"2025-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11761139/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143034744","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Serial dependencies in motor targeting as a function of target appearance.","authors":"Sandra Tyralla, Eckart Zimmermann","doi":"10.1167/jov.24.13.6","DOIUrl":"10.1167/jov.24.13.6","url":null,"abstract":"<p><p>In order to bring stimuli of interest into our central field of vision, we perform saccadic eye movements. After every saccade, the error between the predicted and actual landing position is monitored. In the laboratory, artificial post-saccadic errors are created by displacing the target during saccade execution. Previous research found that even a single post-saccadic error induces immediate amplitude changes to minimize that error. The saccadic amplitude adjustment could result from a recalibration of the saccade target representation. We asked if recalibration follows an integration scheme in which the impact magnitude of the previous post-saccadic target location depends on the certainty of the current target. We asked subjects to perform saccades to Gaussian blobs as targets, the visuospatial certainty of which we manipulated by changing its spatial constant. In separate sessions, either the pre-saccadic or post-saccadic target was uncertain. Additionally, we manipulated the contrast to further decrease certainty, changing the spatial constant mid-saccade. We found saccade-by-saccade amplitude reductions only with a currently uncertain target, a previously certain one, and a constant target contrast. We conclude that the features of the pre-saccadic target (i.e., size and contrast) determine the extent to which post-saccadic error shapes upcoming saccade amplitudes.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"24 13","pages":"6"},"PeriodicalIF":2.0,"publicationDate":"2024-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11629911/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142787496","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Preferred fixation position and gaze location: Two factors modulating the composite face effect.","authors":"Puneeth N Chakravarthula, Ansh K Soni, Miguel P Eckstein","doi":"10.1167/jov.24.13.15","DOIUrl":"10.1167/jov.24.13.15","url":null,"abstract":"<p><p>Humans consistently land their first saccade to a face at a preferred fixation location (PFL). Humans also typically process faces as wholes, as evidenced by perceptual effects such as the composite face effect (CFE). However, not known is whether an individual's tendency to process faces as wholes varies with their gaze patterns on the face. Here, we investigated variation of the CFE with the PFL. We compared the strength of the CFE for two groups of observers who were screened to have their PFLs either higher up, closer to the eyes, or lower on the face, closer to the tip of the nose. During the task, observers maintained their gaze at either their own group's mean PFL or at the other group's mean PFL. We found that the top half of the face elicits a stronger CFE than the bottom half. Further, the strength of the CFE was modulated by the distance of the PFL from the eyes, such that individuals with a PFL closer to the eyes had a stronger CFE than those with a PFL closer to the mouth. Finally, the top-half CFE for both upper-lookers and lower-lookers was abolished when they fixated at a non-preferred location on the face. Our findings show that the CFE relies on internal face representations shaped by the long-term use of a consistent oculomotor strategy to view faces.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"24 13","pages":"15"},"PeriodicalIF":2.0,"publicationDate":"2024-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11681917/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142899810","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reviewers.","authors":"","doi":"10.1167/jov.24.13.16","DOIUrl":"10.1167/jov.24.13.16","url":null,"abstract":"","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"24 13","pages":"16"},"PeriodicalIF":2.0,"publicationDate":"2024-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11684487/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142957898","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}