{"title":"Corrections to: The contribution of luminance and chromatic channels to color assimilation.","authors":"","doi":"10.1167/jov.25.4.2","DOIUrl":"10.1167/jov.25.4.2","url":null,"abstract":"","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 4","pages":"2"},"PeriodicalIF":2.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11977790/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143781561","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Contributed Talks I: Detecting and characterising microsaccades from AOSLO images of the photoreceptor mosaic using computer vision.","authors":"Maria Villamil, Allie C Schneider, Jiahe Cui, Laura K Young, Hannah E Smithson","doi":"10.1167/jov.25.5.5","DOIUrl":"https://doi.org/10.1167/jov.25.5.5","url":null,"abstract":"<p><p>Fixational eye movements (FEMs), especially microsaccades (MS), are promising biomarkers of neurodegenerative disease. In vivo images of the photoreceptor mosaic acquired using an Adaptive Optics Scanning Laser Ophthalmoscope (AOSLO) are systematically distorted by eye motion. Most methods to extract FEMs from AOSLO data rely on comparison to a motion-free reference, giving eye-position as a function of time. MS are subsequently identified using adaptive velocity thresholds (Engbert & Kliegl, 2003). We use computer vision and machine learning (ML) for detection and characterisation of MS directly from raw AOSLO images. For training and validation, we use Emulated Retinal Image CApture (ERICA), an open-source tool to generate synthetic AOSLO datasets of retinal images and ground-truth velocity profiles (Young & Smithson, 2021). To classify regions of AOSLO images that contain a MS, images were divided into a grid of 32-by-32-pixel sub-images. Predictions from rows of sub-images aligned with the fast-scan of the AOSLO were combined, giving 1ms resolution. Model performance was high (F1 scores >0.92) across plausible MS displacement magnitudes and angles, with most errors close to the velocity threshold for classification. Direct velocity predictions were also derived from regression ML models. We show that ML models can be systematically adapted for generalisation to real in vivo images, allowing characterisation of MS at much finer spatial scales than video-based eye-trackers.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 5","pages":"5"},"PeriodicalIF":2.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144039692","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The fate of visual working memory items after their job is done.","authors":"Zachary Hamblin-Frohman, Jay Pratt","doi":"10.1167/jov.25.4.7","DOIUrl":"https://doi.org/10.1167/jov.25.4.7","url":null,"abstract":"<p><p>Visual working memory is a competitive, capacity-limited system for the storage of feature and object-based information. In change-detection tasks, items are encoded into memory and, after a retention period, are compared against a test set. Loss of information can occur from attentional interference or prioritizing some items over others. But what happens to the memory representations after the change-detection task is completed? The current article examines the fate of a memory item after its behavioral purpose has been fulfilled. Participants encoded a single item in memory for a difficult change-detection task. Visual search trials were presented both before and after the memory test was completed. Singleton distractors were present in these search trials that could match or not the memory item. In Experiment 1, memory-driven capture (the memory-matching distractors led to longer search response times than the unrelated distractor) was observed in the pre-memory test and, in a weaker form, the post-test search trials. In Experiment 2, we introduced cues that indicated the memory test would not occur on a subset of trials, controlling for re-exposure to the memory stimulus. Memory-driven capture was again observed for these post-cue search trials, but only at a short time interval, at a longer interval this effect was attenuated. These results suggest that the memory representations only linger briefly in the visual system.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 4","pages":"7"},"PeriodicalIF":2.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12011125/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144054431","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Poster Session: Melanopsin modulation of cortical S-cone responses.","authors":"Lauren E Welbourne, Joel T Martin, Federico Segala, Annie Morsi, Alex Carter, Alex R Wade, Daniel H Baker","doi":"10.1167/jov.25.5.22","DOIUrl":"https://doi.org/10.1167/jov.25.5.22","url":null,"abstract":"<p><p>Melanopsin is a non-image forming light sensitive retinal photopigment. Melanopsin activation takes longer to peak and has a prolonged response relative to cone photoreceptors. Recent evidence suggests that melanopsin-driven signals may influence vision (through cone photoreceptor modulation), but it is unclear whether melanopsin can directly stimulate visual cortex (e.g. V1), in addition to subcortical pathways. Our lab recently observed unusual fMRI time course responses in V1 for S-cone isolating stimuli, where the response was maintained for the duration of the stimulus 'off' period - it did not return to baseline after stimulus offset (at 12 seconds). We hypothesised that this was due to an effect of lingering melanopsin activation, which was activated by the S-cone isolating stimuli because we did not explicitly silence melanopsin in that study. In the present study, we used a custom-made multi-primary LED system, to create S-cone isolating stimuli that either activated or silenced melanopsin. Stimuli were presented in a block design, 15s ON / 30s OFF to allow time for a sustained response to return to baseline between conditions. Here we present evidence from 11 participants of melanopsin-driven responses in cortical area V1 - where the S-cone melanopsin-active condition showed a larger response after stimulus offset than the S-cone melanopsin-silenced condition.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 5","pages":"22"},"PeriodicalIF":2.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144054830","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Invited Session IV: The visual ecology of colour and light: How does melanopsin help us to see?","authors":"Annette Allen","doi":"10.1167/jov.25.5.53","DOIUrl":"https://doi.org/10.1167/jov.25.5.53","url":null,"abstract":"<p><p>Environmental light intensity (irradiance) is a powerful regulator of physiology and behaviour. A stable neuronal representation of light intensity is grounded in a specialised retinal output channel, found in humans and other mammals, and arising from intrinsically photosensitive retinal ganglion cells (ipRGCs). These are a rare class of retinal ganglion cells with autonomous sensitivity to light, thanks to their expression of the photopigment melanopsin. Melanopsin photoreception is optimised to encode low-frequency changes in the light environment and, as a result, extends the temporal and spatial range over which light is detected by the retina. ipRGCs innervate many brain areas, and this allows melanopsin light responses to be used for diverse purposes, ranging from the synchronization of the circadian clock with the solar day to light's regulation of mood, alertness, and neuroendocrine and cognitive functions. There is now also abundant evidence that ipRGCs also make an important contribution to the processes of perceptual vision, via their projection to the visual thalamus. Here I will discuss ongoing research exploring how melanopsin extends the spatial and temporal range over which light is detected by the retina, and the role this plays in augmenting the detection of patterns in brightness.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 5","pages":"53"},"PeriodicalIF":2.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144038424","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Poster Session: Using OPM-MEG to study the timecourse of human contrast discrimination.","authors":"Abbie Lawton, Richard Aveyard, Alex Wade, Ben Clayden, Stephen Robinson","doi":"10.1167/jov.25.5.15","DOIUrl":"https://doi.org/10.1167/jov.25.5.15","url":null,"abstract":"<p><p>The International Brain Laboratory (IBL) is a large-scale project collecting multiunit measurements from the mouse brain during a simple 2AFC perceptual decision task. The goal is to characterise the flow of information across the brain from sensory input areas through to motor outputs, and the way that this information flow can be modulated by priors. Our lab is translating the IBL task to humans using a combination of psychophysics and neuroimaging. Here we describe the results from a pilot study using a novel type of neuroimaging (OPM-MEG). We first describe the adaptations necessary to alter the original rodent task to make it appropriate for human subjects. We then present behavioral and neuroimaging data obtained using this modified paradigm. Human psychophysical responses recapitulate key features of the rodent behavioural data - including the effect of perceptual priors or 'bias'. Psychophysical response functions have the same form and bias dependency as those obtained from mice. Using the MEG data we are able to decode key features of the IBL paradigm including visual stimuli, responses, bias blocks and feedback in a time-resolved manner. We show that OPM-MEG responses are consistent with fMRI responses obtained in our lab using the same paradigm. We conclude that multimodal neuroimaging techniques (OPM-MEG and fMRI) can be applied to the IBL task allowing us to relate neuronal-level recordings in rodents with whole-brain population responses in humans.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 5","pages":"15"},"PeriodicalIF":2.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144058554","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Poster Session: Deviation mapping for foveal cone mosaic topography.","authors":"Jenna Grieshop, Emma Warr, Ashleigh Walesa, Katherine Hemsworth, Joseph Carroll","doi":"10.1167/jov.25.5.27","DOIUrl":"https://doi.org/10.1167/jov.25.5.27","url":null,"abstract":"<p><p>Deviation mapping is commonly used across retinal imaging modalities. Here we compiled data from two labs (UC Berkley[1] & MCW) to create an AOSLO-specific deviation mapping tool for measures of the foveal cone mosaic. Foveal cones were identified for 87 normative regions of interest (ROIs) (26M, 61F; 13-67 yrs, median=26 yrs) and for 5 pathological ROIs (2 Bornholm Eye Disease, 3 Albinism; 1M, 4F; 16-50 yrs, median=42 yrs). ROIs were cropped and resized to a common scale for comparison. Density and nearest neighbor distance (NND) maps were generated for each ROI, and the cone density centroid[2] (CDC) was determined for each map. Normative maps were aligned using these CDC locations, and average and standard deviation (SD) maps were created for both density and NND. Pathology maps were compared to these normative composite maps. At the CDC, average (SD) density was 1.79E+5 (2.55E+4) cones/mm^2 and average (SD) NND was 2.08 (0.16) µm. For pathological ROIs, the percentage of pixels within 1 SD of the normative data was comparable for density and NND except in two individuals where density was more deviant than NND (consistent with mosaic irregularity and/or random cone loss). Deviation mapping applied to foveal AOSLO data can be used to assess the normality of individual foveal ROIs. Comparing deviation maps across different metrics may provide valuable insight into the underlying properties of the cone mosaic in various retinal pathologies. 1) PMID:31348002 2) PMID:34343479.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 5","pages":"27"},"PeriodicalIF":2.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144013257","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Contributed talks II: Colour-selective regions of visual cortex are responsive to the colour statistics of objects.","authors":"Ian Pennock, John Maule, Chris Racey, Teresa Tang, Yasmin Richter, Chris Bird, Jenny M Bosten, Anna Franklin","doi":"10.1167/jov.25.5.38","DOIUrl":"https://doi.org/10.1167/jov.25.5.38","url":null,"abstract":"<p><p>It has been suggested that objects are more likely to be warmer in colour, redder and more saturated than the background. Here, we investigate the colour statistics of objects, and the brain regions that are responsive to these statistics. First, we analysed the Natural Scenes Dataset (NSD), a 7T dataset in which 8 participants viewed up to 10,000 natural scenes. Our analysis of the chromaticities of the 80 segmented object classes and backgrounds confirmed that object pixels were warmer, redder, more saturated and darker than background pixels. The probability that pixels were from objects rather than backgrounds (the 'Object Colour Probability', OCP) was calculated for 240 hue bins. The mean OCP of images correlated with NSD BOLD responses mostly in the ventral visual pathway. Other image statistics (e.g., number of food pixels) better explained the responses of correlated voxels. A second fMRI study, in which colours were shown as a single patch on a grey background, was analysed to study whether ventral visual pathway is responsive to OCP in the absence of other scene statistics. To constrain our analyses to functionally relevant areas, we used independent functional localizers to identify colour- and object-selective areas and combined these with NSD defined OCP responsive areas. The OCP of the colour patches significantly correlated with BOLD in colour-selective but not object-selective visual regions. Implications for the role of colour in object vision are discussed.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 5","pages":"38"},"PeriodicalIF":2.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144006117","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Invited Session I: Focusing on the Human Fovea: Active vision at the foveolar scale: Insights from fixational oculomotor behavior and retinal anatomy.","authors":"Martina Poletti","doi":"10.1167/jov.25.5.3","DOIUrl":"https://doi.org/10.1167/jov.25.5.3","url":null,"abstract":"<p><p>Vision is an active process even at its finest scale in the 1-deg foveola, the visual system is primarily sensitive to changes in the visual input and it has been shown that fixational eye movements reformat the spatiotemporal flow to the retina in a way that is optimal for fine spatial vision. Using high-precision eye-tracking coupled with a system for gaze-contingent display capable of localizing the line of sight with arcminute precision, and an Adaptive Optics Scanning Laser Ophthalmoscope (AOSLO) for high-resolution retinal imaging enabling retinal-contingent manipulations of the visual input, our results show that the need for active foveolar vision also stems from the non-uniformity of fine spatial vision across this region. Further, we show that the visual system is highly sensitive even to a small sub-foveolar loss of vision and fixation behavior is readjusted to compensate for this loss. Overall, the emerging picture is that of a highly non-homogenous foveolar vision characterized by a refined level of control of attention and fixational eye movements at this scale.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 5","pages":"3"},"PeriodicalIF":2.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144004852","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Contributed Talks III: Pulse trains to percepts: A virtual patient describing the perceptual effects of human visual cortical stimulation.","authors":"Ione Fine, Geoffrey Matthews Boynton","doi":"10.1167/jov.25.5.59","DOIUrl":"https://doi.org/10.1167/jov.25.5.59","url":null,"abstract":"<p><p>Here we describe how computational models or 'virtual patients', based on the neurophysiological architecture of V1, can be used to predict the perceptual experience of cortical implant patients. Our virtual patient model can successfully describe psychophysical data from a wide range of previously published studies describing the location, size, brightness and spatiotemporal shape of electrically induced percepts in humans. Our simulations suggest that, in the foreseeable future, the perceptual quality of cortical prosthetic devices is likely to be limited by the neurophysiological organization of the visual cortex, rather than the size and spacing of electrodes.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"25 5","pages":"59"},"PeriodicalIF":2.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144042614","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}