How does V1 population activity inform perceptual certainty?
Zoe M Boundy-Singer, Corey M Ziemba, Olivier J Hénaff, Robbe L T Goris
Journal of Vision, 24(6):12, published 2024-06-03. DOI: 10.1167/jov.24.6.12. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11185272/pdf/

Neural population activity in sensory cortex informs our perceptual interpretation of the environment. Oftentimes, this population activity will support multiple alternative interpretations. The larger the spread of probability over different alternatives, the more uncertain the selected perceptual interpretation. We test the hypothesis that the reliability of perceptual interpretations can be revealed through simple transformations of sensory population activity. We recorded V1 population activity in fixating macaques while presenting oriented stimuli under different levels of nuisance variability and signal strength. We developed a decoding procedure to infer from V1 activity the most likely stimulus orientation as well as the certainty of this estimate. Our analysis shows that response magnitude, response dispersion, and variability in response gain all offer useful proxies for orientation certainty. Of these three metrics, the last one has the strongest association with the decoder's uncertainty estimates. These results clarify that the nature of neural population activity in sensory cortex provides downstream circuits with multiple options to assess the reliability of perceptual interpretations.
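The decoding idea in this abstract can be illustrated with a toy model (not the authors' published decoder): a population of orientation-tuned Poisson neurons is read out by computing a likelihood over candidate orientations, and certainty is taken as the concentration of that likelihood. The von Mises tuning form and the parameters `kappa` and `gain` are illustrative assumptions, not values from the paper.

```python
import numpy as np

def decode_orientation(responses, pref_oris, kappa=2.0, gain=10.0):
    """Toy maximum-likelihood readout of orientation from a population of
    Poisson neurons with von Mises tuning (180-degree periodic).

    Returns the most likely orientation (radians) and a certainty score:
    the resultant length of the normalized likelihood on the doubled
    circle (1 = fully concentrated, 0 = flat)."""
    oris = np.linspace(0, np.pi, 180, endpoint=False)   # candidate orientations
    # expected firing rate of each neuron at each candidate orientation
    rates = gain * np.exp(kappa * (np.cos(2 * (oris[:, None] - pref_oris)) - 1))
    # Poisson log likelihood (terms that depend only on the responses dropped)
    loglik = responses @ np.log(rates).T - rates.sum(axis=1)
    post = np.exp(loglik - loglik.max())
    post /= post.sum()
    est = oris[np.argmax(post)]
    certainty = np.abs(np.sum(post * np.exp(2j * oris)))
    return est, certainty
```

A broader likelihood (e.g., from weaker or more variable responses) lowers the certainty score, which is the sense in which certainty can be read out alongside the orientation estimate.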
Prediction of time to contact under perceptual and contextual uncertainties.
Pamela Villavicencio, Cristina de la Malla, Joan López-Moliner
Journal of Vision, 24(6):14, published 2024-06-03. DOI: 10.1167/jov.24.6.14. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11204063/pdf/

Accurately estimating time to contact (TTC) is crucial for successful interactions with moving objects, yet it is challenging under conditions of sensory and contextual uncertainty, such as occlusion. In this study, participants engaged in a prediction motion task, monitoring a rightward-moving target and an occluder. The participants' task was to press a key when they predicted the target would be aligned with the occluder's right edge. We manipulated sensory uncertainty by varying the visible and occluded periods of the target, thereby modulating the time available to integrate sensory information and the duration over which motion had to be extrapolated. We manipulated contextual uncertainty by comparing predictable and unpredictable conditions, in which the occluder either reliably indicated where the moving target would disappear or provided no such indication. Results showed differences in accuracy between the predictable and unpredictable occluder conditions, with different eye movement patterns in each case. Importantly, the ratio of the time the target was visible, which allows for the integration of sensory information, to the occlusion time, which determines perceptual uncertainty, was a key factor in determining performance. This ratio is central to our proposed model, which provides a robust framework for understanding and predicting human performance in dynamic environments with varying degrees of uncertainty.
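The task structure described above can be sketched in a few lines. This is a minimal constant-velocity extrapolation of the key press, plus the visible/occluded time ratio the abstract highlights; the function and its parameters are illustrative, not the authors' model.

```python
def predicted_response_time(visible_time, occluded_distance, speed):
    """Prediction-motion sketch: the observer watches the target for
    `visible_time` seconds, then it disappears behind an occluder and
    must be extrapolated at constant speed over `occluded_distance`.

    Returns the predicted key-press time (from target onset) and the
    ratio of visible time to occlusion time, the quantity the abstract
    identifies as a key predictor of performance (higher = easier)."""
    occlusion_time = occluded_distance / speed
    ratio = visible_time / occlusion_time
    return visible_time + occlusion_time, ratio
```

For example, a target visible for 0.5 s and then occluded over 10 degrees at 20 deg/s yields a 0.5 s extrapolation period, a predicted press at 1.0 s, and a visibility-to-occlusion ratio of 1.0.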
Transfer of visual perceptual learning over a task-irrelevant feature through feature-invariant representations: Behavioral experiments and model simulations.
Jiajuan Liu, Zhong-Lin Lu, Barbara Dosher
Journal of Vision, 24(6):17, published 2024-06-03. DOI: 10.1167/jov.24.6.17. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11205231/pdf/

A large body of literature has examined specificity and transfer of perceptual learning, suggesting a complex picture. Here, we distinguish between transfer over variations in a "task-relevant" feature (e.g., transfer of a learned orientation task to a different reference orientation) and transfer over a "task-irrelevant" feature (e.g., transfer of a learned orientation task to a different retinal location or different spatial frequency), and we focus on the mechanism for the latter. Experimentally, we assessed whether learning a judgment of one feature (such as orientation) using one value of an irrelevant feature (e.g., spatial frequency) transfers to another value of the irrelevant feature. Experiment 1 examined whether learning in eight-alternative orientation identification with one or multiple spatial frequencies transfers to stimuli at five different spatial frequencies. Experiment 2 paralleled Experiment 1, examining whether learning in eight-alternative spatial-frequency identification at one or multiple orientations transfers to stimuli with five different orientations. Training the orientation task with a single spatial frequency transferred widely to all other spatial frequencies, with a tendency to specificity when training with the highest spatial frequency. Training the spatial frequency task fully transferred across all orientations. Computationally, we extended the identification integrated reweighting theory (I-IRT) to account for the transfer data (Dosher, Liu, & Lu, 2023; Liu, Dosher, & Lu, 2023). Just as location-invariant representations in the original IRT explain transfer over retinal locations, incorporating feature-invariant representations effectively accounted for the observed transfer. Taken together, we suggest that feature-invariant representations can account for transfer of learning over a "task-irrelevant" feature.
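The core idea of a reweighting account can be conveyed with a toy stand-in (this is not the published I-IRT): sensory representations stay fixed, and learning adjusts only the read-out weights via a delta rule. If the representations are shared across values of a task-irrelevant feature, the learned weights transfer to new values of that feature for free.

```python
import numpy as np

def train_reweighting(features, labels, lr=0.01, epochs=50):
    """Minimal delta-rule reweighting readout: fixed feature
    representations, learned read-out weights. `labels` are +/-1."""
    w = np.zeros(features.shape[1])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(features, labels):
            out = np.tanh(w @ x + b)     # graded decision variable
            err = y - out                # delta rule on the output error
            w += lr * err * x
            b += lr * err
    return w, b
```

In this framing, transfer over a task-irrelevant feature is a property of the representation (invariance), not of the learning rule, which mirrors the abstract's conclusion about feature-invariant representations.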
Evaluating integration of letter fragments through contrast and spatially targeted masking.
Sherry Zhang, Jack Morrison, Thomas Sun, Daniel R Kowal, Ernest Greene
Journal of Vision, 24(6):9, published 2024-06-03. DOI: 10.1167/jov.24.6.9. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11174100/pdf/

Four experiments were conducted to gain a better understanding of the visual mechanisms related to how integration of partial shape cues provides for recognition of the full shape. In each experiment, letters formed as outline contours were displayed as a sequence of adjacent segments (fragments), each visible during a 17-ms time frame. The first experiment varied the contrast of the fragments. There were substantial individual differences in contrast sensitivity, so stimulus displays in the masking experiments that followed were calibrated to the sensitivity of each participant. Masks were displayed either as patterns that filled the entire screen (full field) or as successive strips that were sliced from the pattern, each strip lying across the location of the letter fragment that had been shown a moment before. The contrast of the masks was varied to be lighter or darker than the letter fragments. Full-field masks, whether light or dark, produced relatively little impairment of recognition, as was the case for mask strips that were lighter than the letter fragments. However, dark strip masks proved to be very effective, with the degree of recognition impairment becoming larger as mask contrast was increased. A final experiment found the strip masks to be most effective when they overlapped the location where the letter fragments had been shown a moment before. They became progressively less effective with increased spatial separation from that location. Results are discussed with extensive reference to potential brain mechanisms for integrating shape cues.
Development of radial frequency pattern perception in macaque monkeys.
C L Rodríguez Deliz, Gerick M Lee, Brittany N Bushnell, Najib J Majaj, J Anthony Movshon, Lynne Kiorpes
Journal of Vision, 24(6):6, published 2024-06-03. DOI: 10.1167/jov.24.6.6. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11160949/pdf/

Infant primates see poorly, and most perceptual functions mature steadily beyond early infancy. Behavioral studies on human and macaque infants show that global form perception, as measured by the ability to integrate contour information into a coherent percept, improves dramatically throughout the first several years after birth. However, it is unknown when sensitivity to curvature and shape emerges in early life or how it develops. We studied the development of shape sensitivity in 18 macaques, aged 2 months to 10 years. Using radial frequency stimuli, circular targets whose radii are modulated sinusoidally, we tested monkeys' ability to discriminate radial frequency stimuli from circles as a function of the depth and frequency of sinusoidal modulation. We implemented a new four-choice oddity task and compared the resulting data with that from a traditional two-alternative forced choice task. We found that radial frequency pattern perception was measurable at the youngest age tested (2 months). Behavioral performance at all radial frequencies improved with age. Performance was better for higher radial frequencies, suggesting the developing visual system prioritizes processing of fine visual details that are ecologically relevant. By using two complementary methods, we were able to capture a comprehensive developmental trajectory for shape perception.
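Radial frequency stimuli of the kind described here have a standard parametric form: a circle whose radius is modulated sinusoidally with polar angle, r(θ) = r0·(1 + A·sin(fθ + φ)). A short sketch of the contour (exact display parameters such as size and modulation depths are from the stimulus literature generally, not this paper):

```python
import numpy as np

def radial_frequency_contour(freq, amplitude, r0=1.0, phase=0.0, n=360):
    """Radial frequency (RF) pattern contour:
    r(theta) = r0 * (1 + amplitude * sin(freq * theta + phase)).
    amplitude = 0 gives a perfect circle (the discrimination baseline);
    `freq` counts the number of lobes, `amplitude` the modulation depth."""
    theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
    r = r0 * (1 + amplitude * np.sin(freq * theta + phase))
    return r * np.cos(theta), r * np.sin(theta)
```

Threshold measurement in such tasks then amounts to finding the smallest `amplitude` at which the modulated contour can be discriminated from the `amplitude = 0` circle.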
Individual differences in face salience and rapid face saccades.
Maximilian Davide Broda, Petra Borovska, Benjamin de Haas
Journal of Vision, 24(6):16, published 2024-06-03. DOI: 10.1167/jov.24.6.16. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11204136/pdf/

Humans saccade to faces in their periphery faster than to other types of objects. Previous research has highlighted the potential importance of the upper face region in this phenomenon, but it remains unclear whether this is driven by the eye region. Similarly, it remains unclear whether such rapid saccades are exclusive to faces or generalize to other semantically salient stimuli. Furthermore, it is unknown whether individuals differ in their face-specific saccadic reaction times and, if so, whether such differences could be linked to differences in face fixations during free viewing. To explore these open questions, we invited 77 participants to perform a saccadic choice task in which we contrasted faces, as well as other salient objects such as isolated face features and text, with cars. Additionally, participants freely viewed 700 images of complex natural scenes in a separate session, which allowed us to determine the individual proportion of first fixations falling on faces. For the saccadic choice task, we found advantages for all categories of interest over cars. However, this effect was most pronounced for images of full faces. Full faces also elicited faster saccades compared with eyes, showing that isolated eye regions are not sufficient to elicit face-like responses. Additionally, we found consistent individual differences in saccadic reaction times toward faces that weakly correlated with face salience during free viewing. Our results suggest a link between semantic salience and rapid detection, but underscore the unique status of faces. Further research is needed to resolve the mechanisms underlying rapid face saccades.
Convolutional neural network models applied to neuronal responses in macaque V1 reveal limited nonlinear processing.
Hui-Yuan Miao, Frank Tong
Journal of Vision, 24(6):1, published 2024-06-03. DOI: 10.1167/jov.24.6.1. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11156204/pdf/

Computational models of the primary visual cortex (V1) have suggested that V1 neurons behave like Gabor filters followed by simple nonlinearities. However, recent work employing convolutional neural network (CNN) models has suggested that V1 relies on far more nonlinear computations than previously thought. Specifically, unit responses in an intermediate layer of VGG-19 were found to best predict macaque V1 responses to thousands of natural and synthetic images. Here, we evaluated the hypothesis that the poor performance of lower layer units in VGG-19 might be attributable to their small receptive field size rather than to their lack of complexity per se. We compared VGG-19 with AlexNet, which has much larger receptive fields in its lower layers. Whereas the best-performing layer of VGG-19 occurred after seven nonlinear steps, the first convolutional layer of AlexNet best predicted V1 responses. Although the predictive accuracy of VGG-19 was somewhat better than that of standard AlexNet, we found that a modified version of AlexNet could match the performance of VGG-19 after only a few nonlinear computations. Control analyses revealed that decreasing the size of the input images caused the best-performing layer of VGG-19 to shift to a lower layer, consistent with the hypothesis that the relationship between image size and receptive field size can strongly affect model performance. We conducted additional analyses using a Gabor pyramid model to test for nonlinear contributions of normalization and contrast saturation. Overall, our findings suggest that the feedforward responses of V1 neurons can be well explained by assuming only a few nonlinear processing stages.
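The layer-comparison analyses described here typically score each model layer by how well a regularized linear regression from its features predicts held-out neural responses. A minimal sketch of one such predictivity score (closed-form ridge with a single train/test split; the real analyses use richer cross-validation and feature preprocessing, so this is illustrative only):

```python
import numpy as np

def layer_predictivity(features, responses, alpha=1.0):
    """Toy layer-predictivity score: fit `responses` ~ `features` with
    ridge regression on the first half of the images and return R^2 on
    the second half. `features` is (n_images, n_units) for one layer."""
    n = features.shape[0]
    half = n // 2
    Xtr, Xte = features[:half], features[half:]
    ytr, yte = responses[:half], responses[half:]
    d = Xtr.shape[1]
    # closed-form ridge solution: w = (X^T X + alpha I)^{-1} X^T y
    w = np.linalg.solve(Xtr.T @ Xtr + alpha * np.eye(d), Xtr.T @ ytr)
    pred = Xte @ w
    ss_res = np.sum((yte - pred) ** 2)
    ss_tot = np.sum((yte - yte.mean()) ** 2)
    return 1 - ss_res / ss_tot
```

Running this score over the feature matrices of successive layers, and reporting which layer maximizes it, is the comparison that underlies statements like "the first convolutional layer of AlexNet best predicted V1 responses."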
Unique yellow shifts for small and brief stimuli in the central retina.
Maxwell J Greene, Alexandra E Boehm, John E Vanston, Vimal P Pandiyan, Ramkumar Sabesan, William S Tuten
Journal of Vision, 24(6):2, published 2024-06-03. DOI: 10.1167/jov.24.6.2. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11156209/pdf/

The spectral locus of unique yellow was determined for flashes of different sizes (<11 arcmin) and durations (<500 ms) presented in and near the fovea. An adaptive optics scanning laser ophthalmoscope was used to minimize the effects of higher-order aberrations during simultaneous stimulus delivery and retinal imaging. In certain subjects, parafoveal cones were classified as L, M, or S, which permitted the comparison of unique yellow measurements with variations in local L/M ratios within and between observers. Unique yellow shifted to longer wavelengths as stimulus size or duration was reduced. This effect is most pronounced for changes in size and more apparent in the fovea than in the parafovea. The observed variations in unique yellow are not entirely predicted from variations in L/M ratio and therefore implicate neural processes beyond photoreception.
The role of prediction and visual tracking strategies during manual interception: An exploration of individual differences.
Tom Arthur, Samuel Vine, Mark Wilson, David Harris
Journal of Vision, 24(6):4, published 2024-06-03. DOI: 10.1167/jov.24.6.4. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11160954/pdf/

The interception (or avoidance) of moving objects is a common component of various daily living tasks; however, it remains unclear whether precise alignment of foveal vision with a target is important for motor performance. Furthermore, there has been little examination of individual differences in visual tracking strategy and the use of anticipatory gaze adjustments. We examined the importance of in-flight tracking and predictive visual behaviors using a virtual reality environment that required participants (n = 41) to intercept tennis balls projected from one of two possible locations. Here, we explored whether different tracking strategies spontaneously arose during the task, and which were most effective. Although indices of closer in-flight tracking (pursuit gain, tracking coherence, tracking lag, and saccades) were predictive of better interception performance, these relationships were rather weak. Anticipatory gaze shifts toward the correct release location of the ball provided no benefit for subsequent interception. Nonetheless, two interceptive strategies were evident: 1) early anticipation of the ball's onset location followed by attempts to closely track the ball in flight (i.e., a predictive strategy); or 2) positioning gaze between possible onset locations and then using peripheral vision to locate the moving ball (i.e., a visual pivot strategy). Despite showing much poorer in-flight foveal tracking of the ball, participants adopting a visual pivot strategy performed slightly better in the task. Overall, these results indicate that precise alignment of the fovea with the target may not be critical for interception tasks, but that observers can adopt quite varied visual guidance approaches.
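Of the tracking indices listed above, pursuit gain has a simple conventional definition: eye velocity divided by target velocity over a tracking interval. A minimal sketch (the authors' exact computation, filtering, and interval selection are not specified in the abstract, so this is illustrative):

```python
import numpy as np

def pursuit_gain(eye_pos, target_pos, dt):
    """Pursuit gain over a tracking interval: mean eye velocity divided
    by mean target velocity. Gain near 1 means the eye keeps pace with
    the target; a visual-pivot strategy (gaze parked between locations)
    yields a much lower gain."""
    eye_vel = np.gradient(eye_pos, dt)
    tgt_vel = np.gradient(target_pos, dt)
    return eye_vel.mean() / tgt_vel.mean()
```

For instance, an eye trace covering 9 degrees while the target covers 10 degrees in the same interval gives a gain of 0.9.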
Amodal completion across the brain: The impact of structure and knowledge.
Jordy Thielen, Tessa M van Leeuwen, Simon J Hazenberg, Anna Z L Wester, Floris P de Lange, Rob van Lier
Journal of Vision, 24(6):10, published 2024-06-03. DOI: 10.1167/jov.24.6.10. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11185268/pdf/

This study investigates the phenomenon of amodal completion within the context of naturalistic objects, employing a repetition suppression paradigm to disentangle the influence of structure and knowledge cues on how objects are completed. The research focuses on early visual cortex (EVC) and lateral occipital complex (LOC), shedding light on how these brain regions respond to different completion scenarios. In LOC, we observed suppressed responses to structure- and knowledge-compatible stimuli, providing evidence that both cues influence neural processing in higher-level visual areas. However, in EVC, we did not find evidence for differential responses to completions compatible or incompatible with either structural or knowledge-based expectations. Together, our findings suggest that the interplay between structure and knowledge cues in amodal completion predominantly impacts higher-level visual processing, with less pronounced effects on the early visual cortex. This study contributes to our understanding of the complex mechanisms underlying visual perception and highlights the distinct roles played by different brain regions in amodal completion.