Binocular integration of chromatic and luminance signals.
Daniel H Baker, Kirralise J Hansford, Federico G Segala, Anisa Y Morsi, Rowan J Huxley, Joel T Martin, Maya Rockman, Alex R Wade
Journal of Vision, 24(12):7, 2024-11-04. doi:10.1167/jov.24.12.7. PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11556357/pdf/

Abstract: Much progress has been made in understanding how the brain combines signals from the two eyes. However, most of this work has involved achromatic (black and white) stimuli, and it is not clear if the same processes apply in color-sensitive pathways. In our first experiment, we measured contrast discrimination ("dipper") functions for four key ocular configurations (monocular, binocular, half-binocular, and dichoptic), for achromatic, isoluminant L-M, and isoluminant S-(L+M) sine-wave grating stimuli (L: long-, M: medium-, S: short-wavelength). We find a similar pattern of results across stimuli, implying equivalently strong interocular suppression within each pathway. Our second experiment measured dichoptic masking within and between pathways using the method of constant stimuli. Masking was strongest within-pathway and weakest between S-(L+M) and achromatic mechanisms. Finally, we repeated the dipper experiment using temporal luminance modulations, which produced slightly weaker interocular suppression than for spatially modulated stimuli. We interpret our results in the context of a contemporary two-stage model of binocular contrast gain control, implemented here using a hierarchical Bayesian framework. Posterior distributions of the weight of interocular suppression overlapped with a value of 1 for all dipper data sets, and the model captured well the pattern of thresholds from all three experiments.
{"title":"Individual differences reveal similarities in serial dependence effects across perceptual tasks, but not to oculomotor tasks.","authors":"Shuchen Guan, Alexander Goettker","doi":"10.1167/jov.24.12.2","DOIUrl":"10.1167/jov.24.12.2","url":null,"abstract":"<p><p>Serial dependence effects have been observed across a wide range of perceptual and oculomotor tasks. This opens up the question of whether these effects observed share underlying mechanisms. Here we measured serial dependence effects in a semipredictable environment for the same group of observers across four different tasks, two perceptual (color and orientation judgments) and two oculomotor (tracking moving targets and the pupil light reflex). By leveraging individual differences, we searched for links in the magnitude of serial dependence effects across the different tasks. On the group level, we observed significant attractive serial dependence effects for all tasks, except the pupil response. The rare absence of a serial dependence effect for the reflex-like pupil light response suggests that sequential effects require cortical processing or even higher-level cognition. For the tasks with significant serial dependence effects, there was substantial and reliable variance in the magnitude of the sequential effects. We observed a significant relationship in the strength of serial dependence for the two perceptual tasks, but no relation between the perceptual tasks and oculomotor tracking. This emphasizes differences in processing between perception and oculomotor control. The lack of a correlation across all tasks indicates that it is unlikely that the relation between the individual differences in the magnitude of serial dependence is driven by more general mechanisms related to for example working memory. It suggests that there are other shared perceptual or decisional mechanisms for serial dependence effects across different low-level perceptual tasks.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"24 12","pages":"2"},"PeriodicalIF":2.0,"publicationDate":"2024-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11542503/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142568597","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"How the window of visibility varies around polar angle.","authors":"Yuna Kwak, Zhong-Lin Lu, Marisa Carrasco","doi":"10.1167/jov.24.12.4","DOIUrl":"10.1167/jov.24.12.4","url":null,"abstract":"<p><p>Contrast sensitivity, the amount of contrast required to discriminate an object, depends on spatial frequency (SF). The contrast sensitivity function (CSF) peaks at intermediate SFs and drops at other SFs. The CSF varies from foveal to peripheral vision, but only a couple of studies have assessed how the CSF changes with polar angle of the visual field. For many visual dimensions, sensitivity is better along the horizontal than the vertical meridian and at the lower than the upper vertical meridian, yielding polar angle asymmetries. Here, for the first time, to our knowledge, we investigate CSF attributes around polar angle at both group and individual levels and examine the relations in CSFs across locations and individual observers. To do so, we used hierarchical Bayesian modeling, which enables precise estimation of CSF parameters. At the group level, maximum contrast sensitivity and the SF at which the sensitivity peaks are higher at the horizontal than vertical meridian and at the lower than the upper vertical meridian. By analyzing the covariance across observers (n = 28), we found that, at the individual level, CSF attributes (e.g., maximum sensitivity) across locations are highly correlated. This correlation indicates that, although the CSFs differ across locations, the CSF at one location is predictive of that at another location. Within each location, the CSF attributes covary, indicating that CSFs across individuals vary in a consistent manner (e.g., as maximum sensitivity increases, so does the corresponding SF), but more so at the horizontal than the vertical meridian locations. These results show similarities and uncover some critical polar angle differences across locations and individuals, suggesting that the CSF should not be generalized across isoeccentric locations around the visual field.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"24 12","pages":"4"},"PeriodicalIF":2.0,"publicationDate":"2024-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11542588/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142583418","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Rapid eye and hand responses in an interception task are differentially modulated by context-dependent predictability.
Jolande Fooken, Parsa Balalaie, Kayne Park, J Randall Flanagan, Stephen H Scott
Journal of Vision, 24(12):10, 2024-11-04. doi:10.1167/jov.24.12.10. PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11578145/pdf/

Abstract: When catching a falling ball or avoiding a collision with traffic, humans can quickly generate eye and limb responses to unpredictable changes in their environment. Mechanisms of limb and oculomotor control when responding to sudden changes in the environment have mostly been investigated independently. Here, we investigated eye-hand coordination in a rapid interception task where human participants used a virtual paddle to intercept a moving target. The target moved vertically down a computer screen and could suddenly jump to the left or right. In high-certainty blocks, the target always jumped; in low-certainty blocks, the target only jumped in a portion of the trials. Further, we manipulated response urgency by varying the time of target jumps, with early jumps requiring less urgent responses and late jumps requiring more urgent responses. Our results highlight differential effects of certainty and urgency on eye-hand coordination. Participants initiated both eye and hand responses earlier for high-certainty compared with low-certainty blocks. Hand reaction times decreased and response vigor increased with increasing urgency levels. However, eye reaction times were lowest for medium-urgency levels and eye vigor was unaffected by urgency. Across all trials, we found a weak positive correlation between eye and hand responses. Taken together, these results suggest that the limb and oculomotor systems use similar early sensorimotor processing; however, rapid responses are modulated differentially to attain system-specific sensorimotor goals.

Enhanced visual contrast suppression during peak psilocybin effects: Psychophysical results from a pilot randomized controlled trial.
Link Ray Swanson, Sophia Jungers, Ranji Varghese, Kathryn R Cullen, Michael D Evans, Jessica L Nielson, Michael-Paul Schallmo
Journal of Vision, 24(12):5, 2024-11-04. doi:10.1167/jov.24.12.5. PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11540033/pdf/

Abstract: In visual perception, an effect known as surround suppression occurs wherein the apparent contrast of a center stimulus is reduced when it is presented within a higher-contrast surrounding stimulus. Many key aspects of visual perception involve surround suppression, yet the neuromodulatory processes involved remain unclear. Psilocybin is a serotonergic psychedelic compound known for its robust effects on visual perception, particularly texture, color, object, and motion perception. We asked whether surround suppression is altered under peak effects of psilocybin. Using a contrast-matching task with different center-surround stimulus configurations, we measured surround suppression after 25 mg of psilocybin compared with placebo (100 mg niacin). Data on harms were collected, and no serious adverse events were reported. After taking psilocybin, participants (n = 6) reported stronger surround suppression of perceived contrast compared to placebo. Furthermore, we found that the intensity of subjective psychedelic visuals induced by psilocybin correlated positively with the magnitude of surround suppression. We note the potential relevance of our findings for the field of psychiatry, given that studies have demonstrated weakened visual surround suppression in both major depressive disorder and schizophrenia. Our findings are thus relevant to understanding the visual effects of psilocybin, and the potential mechanisms of visual disruption in mental health disorders.
{"title":"Flexible Relations Between Confidence and Confidence RTs in Post-Decisional Models of Confidence: A Reply to Chen and Rahnev.","authors":"Stef Herregods, Luc Vermeylen, Kobe Desender","doi":"10.1167/jov.24.12.9","DOIUrl":"10.1167/jov.24.12.9","url":null,"abstract":"","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"24 12","pages":"9"},"PeriodicalIF":2.0,"publicationDate":"2024-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11572761/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142631667","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Modeling the dynamics of contextual cueing effect by reinforcement learning.","authors":"Yasuhiro Hatori, Zheng-Xiong Yuan, Chia-Huei Tseng, Ichiro Kuriki, Satoshi Shioiri","doi":"10.1167/jov.24.12.11","DOIUrl":"10.1167/jov.24.12.11","url":null,"abstract":"<p><p>Humans use environmental context for facilitating object searches. The benefit of context for visual search requires learning. Modeling the learning process of context for efficient processing is vital to understanding visual function in everyday environments. We proposed a model that accounts for the contextual cueing effect, which refers to the learning effect of scene context to identify the location of a target item. The model extracted the global feature of a scene and gradually strengthened the relationship between the global feature and its target location with repeated observations. We compared the model and human performance with two visual search experiments (letter arrangements on a gray background or a natural scene). The proposed model successfully simulated the faster reduction of the number of saccades required before target detection for the natural scene background compared with the uniform gray background. We further tested whether the model replicated the known characteristics of the contextual cueing effect in terms of local learning around the target, the effect of the ratio of repeated and novel stimuli, and the superiority of natural scenes.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":"24 12","pages":"11"},"PeriodicalIF":2.0,"publicationDate":"2024-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11578146/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142669821","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Investigating the relationship between subjective perception and unconscious feature integration.
Lukas Vogelsang, Maëlan Q Menétrey, Leila Drissi-Daoudi, Michael H Herzog
Journal of Vision, 24(12):1, 2024-11-04. doi:10.1167/jov.24.12.1. PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11540028/pdf/

Abstract: Visual features need to be temporally integrated to detect motion signals and to solve the many ill-posed problems of vision. It has previously been shown that such integration occurs in windows of unconscious processing of up to 450 milliseconds. However, whether features are integrated should be governed by perceptually meaningful mechanisms. Here, we expand on previous findings suggesting that subjective perception and integration may be linked. Specifically, different observers were found to group elements differently and to exhibit corresponding feature integration behavior. If the former influences the latter, perception would be not only the outcome of integration but potentially also part of it. To test such linkages more systematically, we examined the role of one of the key perceptual grouping cues, color similarity, in the Sequential Metacontrast Paradigm (SQM). In the SQM, participants are presented with two streams of lines expanding from the center outwards. If several lines in the attended motion stream are offset, the offsets integrate unconsciously and mandatorily for periods of up to 450 milliseconds. Across three experiments, we presented lines of varied colors. Our results reveal that individuals who perceive differently colored lines as "popping out" from the motion stream do not exhibit mandatory integration, whereas individuals who perceive such lines as part of an integrated motion stream do show offset integration behavior across the entire stream. These results attest to the proposed linkage between subjective perception and integration behavior in the SQM.

Deep convolutional neural networks are sensitive to face configuration.
Virginia E Strehle, Natalie K Bendiksen, Alice J O'Toole
Journal of Vision, 24(12):6, 2024-11-04. doi:10.1167/jov.24.12.6. PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11542502/pdf/

Abstract: Deep convolutional neural networks (DCNNs) are remarkably accurate models of human face recognition. However, less is known about whether these models generate face representations similar to those used by humans. Sensitivity to facial configuration has long been considered a marker of human perceptual expertise for faces. We tested whether DCNNs trained for face identification "perceive" alterations to facial features and their configuration. We also compared the extent to which representations changed as a function of the alteration type. Facial configuration was altered by changing the distance between the eyes or the distance between the nose and mouth. Facial features were altered by replacing the eyes or mouth with those of another face. Altered faces were processed by DCNNs (Ranjan et al., 2018; Szegedy et al., 2017) and the similarity of the generated representations was compared. Both DCNNs were sensitive to configural and feature changes, with changes to configuration altering the DCNN representations more than changes to face features. To determine whether the DCNNs' greater sensitivity to configuration was due to a priori differences in the images or characteristics of the DCNN processing, we compared the representation of features and configuration between the low-level, pixel-based representations and the DCNN-generated representations. Sensitivity to face configuration increased from the pixel-level image to the DCNN encoding, whereas the sensitivity to features did not change. The enhancement of configural information may be due to the utility of configuration for discriminating among similar faces combined with the within-category nature of face identification training.

Ocular biometric responses to simulated polychromatic defocus.
Sowmya Ravikumar, Elise N Harb, Karen E Molina, Sarah E Singh, Joel Segre, Christine F Wildsoet
Journal of Vision, 24(12):3, 2024-11-04. doi:10.1167/jov.24.12.3. PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11540029/pdf/

Abstract: Evidence from human studies of ocular accommodation and studies of animals reared in monochromatic conditions suggests that chromatic signals can guide ocular growth. We hypothesized that ocular biometric responses in humans can be manipulated by simulating the chromatic contrast differences associated with imposed optical defocus. The red, green, and blue (RGB) channels of a movie of the natural world were individually incorporated with computational defocus to create two different movie stimuli. The magnitude of defocus incorporated in the red and blue layers was chosen such that, in one case, it simulated +3 D defocus, referred to as color-signed myopic (CSM) defocus, and in the other case it simulated -3 D defocus, referred to as color-signed hyperopic (CSH) defocus. Seventeen subjects viewed the reference stimulus (unaltered movie) and at least one of the two color-signed defocus stimuli for approximately 1 hour. Axial length (AL) and choroidal thickness (ChT) were measured immediately before and after each session. AL and subfoveal ChT showed no significant change under any of the three conditions. A significant increase in vitreous chamber depth (VCD) was observed following viewing of the CSH stimulus compared with the reference stimulus (0.034 ± 0.03 mm and 0 ± 0.02 mm, respectively; p = 0.018). A significant thinning of the crystalline lens was observed following viewing of the CSH stimulus relative to the CSM stimulus (-0.033 ± 0.03 mm and 0.001 ± 0.03 mm, respectively; p = 0.015). Differences in the effects of the CSM and CSH conditions on VCD and lens thickness suggest a directional, modulatory influence of chromatic defocus. On the other hand, ChT responses showed large variability, rendering it an unreliable biomarker for chromatic defocus-driven responses, at least under the conditions of this study.