Journal of Vision: Latest Articles

Binocular integration of chromatic and luminance signals.
IF 2.0 | CAS Tier 4 | Psychology
Journal of Vision | Pub Date: 2024-11-04 | DOI: 10.1167/jov.24.12.7
Daniel H Baker, Kirralise J Hansford, Federico G Segala, Anisa Y Morsi, Rowan J Huxley, Joel T Martin, Maya Rockman, Alex R Wade
{"title":"Binocular integration of chromatic and luminance signals.","authors":"Daniel H Baker, Kirralise J Hansford, Federico G Segala, Anisa Y Morsi, Rowan J Huxley, Joel T Martin, Maya Rockman, Alex R Wade","doi":"10.1167/jov.24.12.7","DOIUrl":"10.1167/jov.24.12.7","url":null,"abstract":"<p><p>Much progress has been made in understanding how the brain combines signals from the two eyes. However, most of this work has involved achromatic (black and white) stimuli, and it is not clear if the same processes apply in color-sensitive pathways. In our first experiment, we measured contrast discrimination (\"dipper\") functions for four key ocular configurations (monocular, binocular, half-binocular, and dichoptic), for achromatic, isoluminant L-M and isoluminant S-(L+M) sine-wave grating stimuli (L: long-, M: medium-, S: short-wavelength). We find a similar pattern of results across stimuli, implying equivalently strong interocular suppression within each pathway. Our second experiment measured dichoptic masking within and between pathways using the method of constant stimuli. Masking was strongest within-pathway and weakest between S-(L+M) and achromatic mechanisms. Finally, we repeated the dipper experiment using temporal luminance modulations, which produced slightly weaker interocular suppression than for spatially modulated stimuli. We interpret our results in the context of a contemporary two-stage model of binocular contrast gain control, implemented here using a hierarchical Bayesian framework. Posterior distributions of the weight of interocular suppression overlapped with a value of 1 for all dipper data sets, and the model captured well the pattern of thresholds from all three experiments.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2024-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142582746","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
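The two-stage binocular contrast gain control model referenced in the abstract has a well-known general form (after Meese, Georgeson, & Baker, 2006): each eye's signal is divisively suppressed by itself and, with some weight, by the other eye before binocular summation and a second output nonlinearity. The sketch below illustrates that architecture only; the function name and parameter values are illustrative assumptions, not the fitted hierarchical Bayesian model from the paper.

```python
def two_stage_response(cl, cr, m=1.3, s=1.0, w=1.0, p=8.0, q=6.5, z=0.01):
    """Two-stage binocular contrast gain control sketch. cl, cr: left/right
    eye contrasts in %. Stage 1: each eye is divisively suppressed by
    itself and, with weight w, by the other eye. Stage 2: the summed
    monocular outputs pass through a second nonlinearity. Parameter
    values are illustrative, not fitted."""
    el = cl**m / (s + cl + w * cr)    # stage 1, left eye
    er = cr**m / (s + cr + w * cl)    # stage 1, right eye
    b = el + er                       # binocular summation
    return el, er, b**p / (z + b**q)  # stage-1 outputs and stage-2 response

# Interocular suppression: the left eye's stage-1 response to a fixed
# 10% test shrinks as the right-eye mask contrast grows.
for mask in (0.0, 5.0, 20.0):
    el, _, _ = two_stage_response(10.0, mask)
    print(f"mask {mask:4.1f}%  left-eye drive {el:.3f}")
```

Note how a suppression weight w = 1, the value the posteriors overlapped in the dipper data sets, makes the contralateral mask as potent a divisive input as the eye's own contrast.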
Individual differences reveal similarities in serial dependence effects across perceptual tasks, but not to oculomotor tasks.
IF 2.0 | CAS Tier 4 | Psychology
Journal of Vision | Pub Date: 2024-11-04 | DOI: 10.1167/jov.24.12.2
Shuchen Guan, Alexander Goettker
{"title":"Individual differences reveal similarities in serial dependence effects across perceptual tasks, but not to oculomotor tasks.","authors":"Shuchen Guan, Alexander Goettker","doi":"10.1167/jov.24.12.2","DOIUrl":"10.1167/jov.24.12.2","url":null,"abstract":"<p><p>Serial dependence effects have been observed across a wide range of perceptual and oculomotor tasks. This opens up the question of whether these effects observed share underlying mechanisms. Here we measured serial dependence effects in a semipredictable environment for the same group of observers across four different tasks, two perceptual (color and orientation judgments) and two oculomotor (tracking moving targets and the pupil light reflex). By leveraging individual differences, we searched for links in the magnitude of serial dependence effects across the different tasks. On the group level, we observed significant attractive serial dependence effects for all tasks, except the pupil response. The rare absence of a serial dependence effect for the reflex-like pupil light response suggests that sequential effects require cortical processing or even higher-level cognition. For the tasks with significant serial dependence effects, there was substantial and reliable variance in the magnitude of the sequential effects. We observed a significant relationship in the strength of serial dependence for the two perceptual tasks, but no relation between the perceptual tasks and oculomotor tracking. This emphasizes differences in processing between perception and oculomotor control. The lack of a correlation across all tasks indicates that it is unlikely that the relation between the individual differences in the magnitude of serial dependence is driven by more general mechanisms related to for example working memory. It suggests that there are other shared perceptual or decisional mechanisms for serial dependence effects across different low-level perceptual tasks.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2024-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11542503/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142568597","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
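Serial dependence magnitude is commonly quantified by fitting a derivative-of-Gaussian (DoG) curve to response errors as a function of the previous-minus-current stimulus difference; the abstract does not state that Guan and Goettker used exactly this model, so the sketch below is a generic illustration. The fitted amplitude is the kind of per-observer magnitude that an individual-differences analysis would correlate across tasks.

```python
import numpy as np
from scipy.optimize import curve_fit

def dog(x, a, w):
    """Derivative-of-Gaussian serial dependence curve: x is the previous-
    minus-current stimulus difference, a the peak bias (same units as
    the response error), w the inverse width. The constant c makes a
    equal the curve's peak height."""
    c = np.sqrt(2) / np.exp(-0.5)
    return a * c * w * x * np.exp(-(w * x) ** 2)

# Simulate one observer: attractive bias peaking at ~2 units for stimulus
# differences near +/-24 units, plus response noise.
rng = np.random.default_rng(0)
x = rng.uniform(-60, 60, 500)
y = dog(x, 2.0, 0.03) + rng.normal(0, 3, x.size)

(a_hat, w_hat), _ = curve_fit(dog, x, y, p0=[1.0, 0.02])
print(f"recovered bias amplitude: {a_hat:.2f}")  # ~2.0
```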
How the window of visibility varies around polar angle.
IF 2.0 | CAS Tier 4 | Psychology
Journal of Vision | Pub Date: 2024-11-04 | DOI: 10.1167/jov.24.12.4
Yuna Kwak, Zhong-Lin Lu, Marisa Carrasco
{"title":"How the window of visibility varies around polar angle.","authors":"Yuna Kwak, Zhong-Lin Lu, Marisa Carrasco","doi":"10.1167/jov.24.12.4","DOIUrl":"10.1167/jov.24.12.4","url":null,"abstract":"<p><p>Contrast sensitivity, the amount of contrast required to discriminate an object, depends on spatial frequency (SF). The contrast sensitivity function (CSF) peaks at intermediate SFs and drops at other SFs. The CSF varies from foveal to peripheral vision, but only a couple of studies have assessed how the CSF changes with polar angle of the visual field. For many visual dimensions, sensitivity is better along the horizontal than the vertical meridian and at the lower than the upper vertical meridian, yielding polar angle asymmetries. Here, for the first time, to our knowledge, we investigate CSF attributes around polar angle at both group and individual levels and examine the relations in CSFs across locations and individual observers. To do so, we used hierarchical Bayesian modeling, which enables precise estimation of CSF parameters. At the group level, maximum contrast sensitivity and the SF at which the sensitivity peaks are higher at the horizontal than vertical meridian and at the lower than the upper vertical meridian. By analyzing the covariance across observers (n = 28), we found that, at the individual level, CSF attributes (e.g., maximum sensitivity) across locations are highly correlated. This correlation indicates that, although the CSFs differ across locations, the CSF at one location is predictive of that at another location. Within each location, the CSF attributes covary, indicating that CSFs across individuals vary in a consistent manner (e.g., as maximum sensitivity increases, so does the corresponding SF), but more so at the horizontal than the vertical meridian locations. These results show similarities and uncover some critical polar angle differences across locations and individuals, suggesting that the CSF should not be generalized across isoeccentric locations around the visual field.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2024-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11542588/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142583418","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
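A common parametric form for the CSF attributes named here (maximum sensitivity and the SF at which sensitivity peaks) is the log-parabola; the abstract does not specify the authors' exact parameterization, so the following is a hedged illustration with made-up parameter values that echo the group-level horizontal-versus-vertical asymmetry.

```python
import numpy as np

def log_parabola_csf(f, s_max, f_max, beta):
    """Log-parabola CSF, a common parameterization in quick-CSF-style
    methods. s_max: peak sensitivity; f_max: spatial frequency (cpd) at
    the peak; beta: full bandwidth (octaves) at half the peak
    sensitivity. Illustrative only; Kwak et al. estimated CSF parameters
    with a hierarchical Bayesian model."""
    log_s = np.log10(s_max) - np.log10(2) * (2 * np.log2(f / f_max) / beta) ** 2
    return 10 ** log_s

# Hypothetical attributes echoing the group-level result: higher peak
# sensitivity and peak SF on the horizontal than the vertical meridian.
freqs = np.array([0.5, 1, 2, 4, 8, 16])
print(log_parabola_csf(freqs, s_max=120, f_max=2.5, beta=3.5))  # horizontal
print(log_parabola_csf(freqs, s_max=90, f_max=2.0, beta=3.5))   # vertical
```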
Enhanced visual contrast suppression during peak psilocybin effects: Psychophysical results from a pilot randomized controlled trial.
IF 2.0 | CAS Tier 4 | Psychology
Journal of Vision | Pub Date: 2024-11-04 | DOI: 10.1167/jov.24.12.5
Link Ray Swanson, Sophia Jungers, Ranji Varghese, Kathryn R Cullen, Michael D Evans, Jessica L Nielson, Michael-Paul Schallmo
{"title":"Enhanced visual contrast suppression during peak psilocybin effects: Psychophysical results from a pilot randomized controlled trial.","authors":"Link Ray Swanson, Sophia Jungers, Ranji Varghese, Kathryn R Cullen, Michael D Evans, Jessica L Nielson, Michael-Paul Schallmo","doi":"10.1167/jov.24.12.5","DOIUrl":"10.1167/jov.24.12.5","url":null,"abstract":"<p><p>In visual perception, an effect known as surround suppression occurs wherein the apparent contrast of a center stimulus is reduced when it is presented within a higher-contrast surrounding stimulus. Many key aspects of visual perception involve surround suppression, yet the neuromodulatory processes involved remain unclear. Psilocybin is a serotonergic psychedelic compound known for its robust effects on visual perception, particularly texture, color, object, and motion perception. We asked whether surround suppression is altered under peak effects of psilocybin. Using a contrast-matching task with different center-surround stimulus configurations, we measured surround suppression after 25 mg of psilocybin compared with placebo (100 mg niacin). Data on harms were collected, and no serious adverse events were reported. After taking psilocybin, participants (n = 6) reported stronger surround suppression of perceived contrast compared to placebo. Furthermore, we found that the intensity of subjective psychedelic visuals induced by psilocybin correlated positively with the magnitude of surround suppression. We note the potential relevance of our findings for the field of psychiatry, given that studies have demonstrated weakened visual surround suppression in both major depressive disorder and schizophrenia. Our findings are thus relevant to understanding the visual effects of psilocybin, and the potential mechanisms of visual disruption in mental health disorders.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2024-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11540033/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142583472","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
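One standard way to express contrast-matching results as a suppression strength is a normalized suppression index; the formula and numbers below are assumptions for illustration, not values or definitions from the trial.

```python
def suppression_index(match_alone, match_with_surround):
    """Proportional reduction in perceived (matched) contrast caused by
    the surround: 0 = no suppression; 0.45 = the center looks 45% lower
    in contrast. An assumed metric; the paper may define suppression
    strength differently."""
    return 1.0 - match_with_surround / match_alone

# Hypothetical matches (% contrast) for a 40% center, one observer:
print(suppression_index(40.0, 30.0))  # placebo-like: 0.25
print(suppression_index(40.0, 22.0))  # psilocybin-like: 0.45
```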
Investigating the relationship between subjective perception and unconscious feature integration.
IF 2.0 | CAS Tier 4 | Psychology
Journal of Vision | Pub Date: 2024-11-04 | DOI: 10.1167/jov.24.12.1
Lukas Vogelsang, Maëlan Q Menétrey, Leila Drissi-Daoudi, Michael H Herzog
{"title":"Investigating the relationship between subjective perception and unconscious feature integration.","authors":"Lukas Vogelsang, Maëlan Q Menétrey, Leila Drissi-Daoudi, Michael H Herzog","doi":"10.1167/jov.24.12.1","DOIUrl":"10.1167/jov.24.12.1","url":null,"abstract":"<p><p>Visual features need to be temporally integrated to detect motion signals and solve the many ill-posed problems of vision. It has previously been shown that such integration occurs in windows of unconscious processing of up to 450 milliseconds. However, whether features are integrated should be governed by perceptually meaningful mechanisms. Here, we expand on previous findings suggesting that subjective perception and integration may be linked. Specifically, different observers were found to group elements differently and to exhibit corresponding feature integration behavior. If the former were to influence the latter, perception would appear to not only be the outcome of integration but to potentially also be part of it. To test any such linkages more systematically, we here examined the role of one of the key perceptual grouping cues, color similarity, in the Sequential Metacontrast Paradigm (SQM). In the SQM, participants are presented with two streams of lines that are expanding from the center outwards. If several lines in the attended motion stream are offset, offsets integrate unconsciously and mandatorily for periods of up to 450 milliseconds. Across three experiments, we presented lines of varied colors. Our results reveal that individuals who perceive differently colored lines as \"popping out\" from the motion stream do not exhibit mandatory integration but that individuals who perceive such lines as part of an integrated motion stream do show offset integration behavior across the entire stream. These results attest to the proposed linkage between subjective perception and integration behavior in the SQM.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2024-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11540028/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142568724","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
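The mandatory-integration claim can be captured by a toy model in which signed vernier offsets presented within the roughly 450 ms window simply sum into a single percept; this sketch is a conceptual illustration of that idea only, not the authors' model.

```python
def integrated_percept(offsets_ms, window_ms=450):
    """Toy model of mandatory offset integration in the sequential
    metacontrast paradigm: vernier offsets (+1 right, -1 left, arbitrary
    units), given as (time_ms, offset) pairs, combine into one signed
    percept if they fall within the integration window."""
    return sum(off for t, off in offsets_ms if t <= window_ms)

# Two opposite offsets inside the window cancel: observers report an
# aligned stream even though each line alone carries a visible offset.
print(integrated_percept([(80, +1), (240, -1)]))  # 0 -> appears aligned
print(integrated_percept([(80, +1), (520, -1)]))  # +1 -> second offset falls outside
```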
Deep convolutional neural networks are sensitive to face configuration.
IF 2.0 | CAS Tier 4 | Psychology
Journal of Vision | Pub Date: 2024-11-04 | DOI: 10.1167/jov.24.12.6
Virginia E Strehle, Natalie K Bendiksen, Alice J O'Toole
{"title":"Deep convolutional neural networks are sensitive to face configuration.","authors":"Virginia E Strehle, Natalie K Bendiksen, Alice J O'Toole","doi":"10.1167/jov.24.12.6","DOIUrl":"10.1167/jov.24.12.6","url":null,"abstract":"<p><p>Deep convolutional neural networks (DCNNs) are remarkably accurate models of human face recognition. However, less is known about whether these models generate face representations similar to those used by humans. Sensitivity to facial configuration has long been considered a marker of human perceptual expertise for faces. We tested whether DCNNs trained for face identification \"perceive\" alterations to facial features and their configuration. We also compared the extent to which representations changed as a function of the alteration type. Facial configuration was altered by changing the distance between the eyes or the distance between the nose and mouth. Facial features were altered by replacing the eyes or mouth with those of another face. Altered faces were processed by DCNNs (Ranjan et al., 2018; Szegedy et al., 2017) and the similarity of the generated representations was compared. Both DCNNs were sensitive to configural and feature changes-with changes to configuration altering the DCNN representations more than changes to face features. To determine whether the DCNNs' greater sensitivity to configuration was due to a priori differences in the images or characteristics of the DCNN processing, we compared the representation of features and configuration between the low-level, pixel-based representations and the DCNN-generated representations. Sensitivity to face configuration increased from the pixel-level image to the DCNN encoding, whereas the sensitivity to features did not change. The enhancement of configural information may be due to the utility of configuration for discriminating among similar faces combined with the within-category nature of face identification training.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2024-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11542502/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142583279","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
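Representational change in a DCNN is typically measured by comparing the embeddings of the original and altered images, for example with cosine similarity. The embeddings below are random stand-ins for the penultimate-layer outputs of the face networks used in the paper, so the numbers only illustrate the logic of the comparison.

```python
import numpy as np

def cosine_similarity(a, b):
    """Similarity between two face embeddings; values near 1 mean the
    network's representation barely changed."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Hypothetical 512-D identity embeddings; random vectors stand in for a
# trained network's outputs to the original and altered face images.
rng = np.random.default_rng(1)
base = rng.normal(size=512)
feature_change = base + rng.normal(scale=0.2, size=512)  # eyes/mouth swapped
config_change = base + rng.normal(scale=0.5, size=512)   # spacing altered

print(cosine_similarity(base, feature_change))  # closer to 1
print(cosine_similarity(base, config_change))   # lower: larger representational shift
```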
Ocular biometric responses to simulated polychromatic defocus.
IF 2.0 | CAS Tier 4 | Psychology
Journal of Vision | Pub Date: 2024-11-04 | DOI: 10.1167/jov.24.12.3
Sowmya Ravikumar, Elise N Harb, Karen E Molina, Sarah E Singh, Joel Segre, Christine F Wildsoet
{"title":"Ocular biometric responses to simulated polychromatic defocus.","authors":"Sowmya Ravikumar, Elise N Harb, Karen E Molina, Sarah E Singh, Joel Segre, Christine F Wildsoet","doi":"10.1167/jov.24.12.3","DOIUrl":"10.1167/jov.24.12.3","url":null,"abstract":"<p><p>Evidence from human studies of ocular accommodation and studies of animals reared in monochromatic conditions suggest that chromatic signals can guide ocular growth. We hypothesized that ocular biometric response in humans can be manipulated by simulating the chromatic contrast differences associated with imposition of optical defocus. The red, green, and blue (RGB) channels of an RGB movie of the natural world were individually incorporated with computational defocus to create two different movie stimuli. The magnitude of defocus incorporated in the red and blue layers was chosen such that, in one case, it simulated +3 D defocus, referred to as color-signed myopic (CSM) defocus, and in another case it simulated -3 D defocus, referred to as color-signed hyperopic (CSH) defocus. Seventeen subjects viewed the reference stimulus (unaltered movie) and at least one of the two color-signed defocus stimuli for ∼1 hour. Axial length (AL) and choroidal thickness (ChT) were measured immediately before and after each session. AL and subfoveal ChT showed no significant change under any of the three conditions. A significant increase in vitreous chamber depth (VCD) was observed following viewing of the CSH stimulus compared with the reference stimulus (0.034 ± 0.03 mm and 0 ± 0.02 mm, respectively; p = 0.018). A significant thinning of the crystalline lens was observed following viewing of the CSH stimulus relative to the CSM stimulus (-0.033 ± 0.03 mm and 0.001 ± 0.03 mm, respectively; p = 0.015). Differences in the effects of CSM and CSH conditions on VCD and lens thickness suggest a directional, modulatory influence of chromatic defocus. On the other hand, ChT responses showed large variability, rendering it an unreliable biomarker for chromatic defocus-driven responses, at least for the conditions of this study.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2024-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11540029/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142583420","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
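A crude way to picture the color-signed manipulation is channel-wise blur: given the eye's longitudinal chromatic aberration, myopic defocus blurs short wavelengths more than long ones, and hyperopic defocus the reverse. The study computed genuine optical defocus per channel, so the Gaussian-blur stand-in below only conveys the idea; the function name and sigma values are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def color_signed_defocus(rgb, sigma_r, sigma_b):
    """Crude stand-in for channel-wise computational defocus: blur the
    red and blue channels by different amounts while leaving green
    sharp, mimicking the chromatic contrast pattern of signed defocus.
    The paper used true optical defocus, not Gaussian blur."""
    out = rgb.astype(float).copy()
    out[..., 0] = gaussian_filter(out[..., 0], sigma_r)  # red channel
    out[..., 2] = gaussian_filter(out[..., 2], sigma_b)  # blue channel
    return out

frame = np.random.rand(64, 64, 3)  # placeholder for one movie frame
csm_like = color_signed_defocus(frame, sigma_r=0.5, sigma_b=2.0)  # "myopic": blue blurrier
csh_like = color_signed_defocus(frame, sigma_r=2.0, sigma_b=0.5)  # "hyperopic": red blurrier
```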
Microsaccadic suppression of peripheral perceptual detection performance as a function of foveated visual image appearance.
IF 2.0 | CAS Tier 4 | Psychology
Journal of Vision | Pub Date: 2024-10-03 | DOI: 10.1167/jov.24.11.3
Julia Greilich, Matthias P Baumann, Ziad M Hafed
{"title":"Microsaccadic suppression of peripheral perceptual detection performance as a function of foveated visual image appearance.","authors":"Julia Greilich, Matthias P Baumann, Ziad M Hafed","doi":"10.1167/jov.24.11.3","DOIUrl":"10.1167/jov.24.11.3","url":null,"abstract":"<p><p>Microsaccades are known to be associated with a deficit in perceptual detection performance for brief probe flashes presented in their temporal vicinity. However, it is still not clear how such a deficit might depend on the visual environment across which microsaccades are generated. Here, and motivated by studies demonstrating an interaction between visual background image appearance and perceptual suppression strength associated with large saccades, we probed peripheral perceptual detection performance of human subjects while they generated microsaccades over three different visual backgrounds. Subjects fixated near the center of a low spatial frequency grating, a high spatial frequency grating, or a small white fixation spot over an otherwise gray background. When a computer process detected a microsaccade, it presented a brief peripheral probe flash at one of four locations (over a uniform gray background) and at different times. After collecting full psychometric curves, we found that both perceptual detection thresholds and slopes of psychometric curves were impaired for peripheral flashes in the immediate temporal vicinity of microsaccades, and they recovered with later flash times. Importantly, the threshold elevations, but not the psychometric slope reductions, were stronger for the white fixation spot than for either of the two gratings. Thus, like with larger saccades, microsaccadic suppression strength can show a certain degree of image dependence. However, unlike with larger saccades, stronger microsaccadic suppression did not occur with low spatial frequency textures. This observation might reflect the different spatiotemporal retinal transients associated with the small microsaccades in our study versus larger saccades.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2024-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11457924/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142373369","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
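Thresholds and slopes of the kind reported here come from fitting a psychometric function to proportion-correct data; the cumulative-Gaussian form, the guess rate (appropriate for, e.g., a four-alternative task), and the numbers below are illustrative assumptions, not the study's fitted curves.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(contrast, threshold, slope, guess=0.25, lapse=0.02):
    """Cumulative-Gaussian psychometric function in log contrast, one
    common choice for extracting a detection threshold and slope.
    guess=0.25 assumes a four-alternative task; an assumption here."""
    p = norm.cdf(np.log10(contrast), loc=np.log10(threshold), scale=1.0 / slope)
    return guess + (1 - guess - lapse) * p

# Simulated proportion correct: flashes long after vs. right around a
# microsaccade (elevated threshold, shallower slope peri-saccadically).
c = np.array([0.01, 0.02, 0.04, 0.08, 0.16, 0.32])
pc_late = psychometric(c, threshold=0.04, slope=4.0)
pc_peri = psychometric(c, threshold=0.09, slope=2.5)

# Fit threshold and slope while holding guess/lapse fixed.
(thr, slp), _ = curve_fit(lambda x, t, s: psychometric(x, t, s),
                          c, pc_peri, p0=[0.05, 3.0])
print(f"peri-microsaccadic threshold: {thr:.3f}, slope: {slp:.2f}")
```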
The visual experience dataset: Over 200 recorded hours of integrated eye movement, odometry, and egocentric video.
IF 2.0 | CAS Tier 4 | Psychology
Journal of Vision | Pub Date: 2024-10-03 | DOI: 10.1167/jov.24.11.6
Michelle R Greene, Benjamin J Balas, Mark D Lescroart, Paul R MacNeilage, Jennifer A Hart, Kamran Binaee, Peter A Hausamann, Ronald Mezile, Bharath Shankar, Christian B Sinnott, Kaylie Capurro, Savannah Halow, Hunter Howe, Mariam Josyula, Annie Li, Abraham Mieses, Amina Mohamed, Ilya Nudnou, Ezra Parkhill, Peter Riley, Brett Schmidt, Matthew W Shinkle, Wentao Si, Brian Szekely, Joaquin M Torres, Eliana Weissmann
{"title":"The visual experience dataset: Over 200 recorded hours of integrated eye movement, odometry, and egocentric video.","authors":"Michelle R Greene, Benjamin J Balas, Mark D Lescroart, Paul R MacNeilage, Jennifer A Hart, Kamran Binaee, Peter A Hausamann, Ronald Mezile, Bharath Shankar, Christian B Sinnott, Kaylie Capurro, Savannah Halow, Hunter Howe, Mariam Josyula, Annie Li, Abraham Mieses, Amina Mohamed, Ilya Nudnou, Ezra Parkhill, Peter Riley, Brett Schmidt, Matthew W Shinkle, Wentao Si, Brian Szekely, Joaquin M Torres, Eliana Weissmann","doi":"10.1167/jov.24.11.6","DOIUrl":"10.1167/jov.24.11.6","url":null,"abstract":"<p><p>We introduce the Visual Experience Dataset (VEDB), a compilation of more than 240 hours of egocentric video combined with gaze- and head-tracking data that offer an unprecedented view of the visual world as experienced by human observers. The dataset consists of 717 sessions, recorded by 56 observers ranging from 7 to 46 years of age. This article outlines the data collection, processing, and labeling protocols undertaken to ensure a representative sample and discusses the potential sources of error or bias within the dataset. The VEDB's potential applications are vast, including improving gaze-tracking methodologies, assessing spatiotemporal image statistics, and refining deep neural networks for scene and activity recognition. The VEDB is accessible through established open science platforms and is intended to be a living dataset with plans for expansion and community contributions. It is released with an emphasis on ethical considerations, such as participant privacy and the mitigation of potential biases. By providing a dataset grounded in real-world experiences and accompanied by extensive metadata and supporting code, the authors invite the research community to use and contribute to the VEDB, facilitating a richer understanding of visual perception and behavior in naturalistic settings.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2024-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11466363/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142394805","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Integration of auditory and visual cues in spatial navigation under normal and impaired viewing conditions.
IF 2.0 | CAS Tier 4 | Psychology
Journal of Vision | Pub Date: 2024-10-03 | DOI: 10.1167/jov.24.11.7
Corey S Shayman, Maggie K McCracken, Hunter C Finney, Peter C Fino, Jeanine K Stefanucci, Sarah H Creem-Regehr
{"title":"Integration of auditory and visual cues in spatial navigation under normal and impaired viewing conditions.","authors":"Corey S Shayman, Maggie K McCracken, Hunter C Finney, Peter C Fino, Jeanine K Stefanucci, Sarah H Creem-Regehr","doi":"10.1167/jov.24.11.7","DOIUrl":"10.1167/jov.24.11.7","url":null,"abstract":"<p><p>Auditory landmarks can contribute to spatial updating during navigation with vision. Whereas large inter-individual differences have been identified in how navigators combine auditory and visual landmarks, it is still unclear under what circumstances audition is used. Further, whether or not individuals optimally combine auditory cues with visual cues to decrease the amount of perceptual uncertainty, or variability, has not been well-documented. Here, we test audiovisual integration during spatial updating in a virtual navigation task. In Experiment 1, 24 individuals with normal sensory acuity completed a triangular homing task with either visual landmarks, auditory landmarks, or both. In addition, participants experienced a fourth condition with a covert spatial conflict where auditory landmarks were rotated relative to visual landmarks. Participants generally relied more on visual landmarks than auditory landmarks and were no more accurate with multisensory cues than with vision alone. In Experiment 2, a new group of 24 individuals completed the same task, but with simulated low vision in the form of a blur filter to increase visual uncertainty. Again, participants relied more on visual landmarks than auditory ones and no multisensory benefit occurred. Participants navigating with blur did not rely more on their hearing compared with the group that navigated with normal vision. These results support previous research showing that one sensory modality at a time may be sufficient for spatial updating, even under impaired viewing conditions. Future research could investigate task- and participant-specific factors that lead to different strategies of multisensory cue combination with auditory and visual cues.</p>","PeriodicalId":49955,"journal":{"name":"Journal of Vision","volume":null,"pages":null},"PeriodicalIF":2.0,"publicationDate":"2024-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11469273/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142394804","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
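The "optimal combination" benchmark referenced in this abstract is maximum-likelihood (reliability-weighted) cue integration, under which the combined estimate's variance falls below that of either single cue. The sketch below computes the predicted visual weight and combined variability for hypothetical single-cue noise levels; the specific numbers are assumptions, not values from the study.

```python
import numpy as np

def mle_combination(sigma_v, sigma_a):
    """Maximum-likelihood cue combination: each cue is weighted by its
    reliability (inverse variance). Returns the visual weight and the
    predicted variance of the combined estimate, which can never exceed
    that of the better single cue."""
    w_v = (1 / sigma_v**2) / (1 / sigma_v**2 + 1 / sigma_a**2)
    var_av = (sigma_v**2 * sigma_a**2) / (sigma_v**2 + sigma_a**2)
    return w_v, var_av

# Hypothetical homing-error SDs (meters): sharp vision vs. simulated blur,
# with auditory-landmark reliability held constant.
for sigma_v in (0.4, 1.0):
    w_v, var_av = mle_combination(sigma_v, sigma_a=1.2)
    print(f"visual weight {w_v:.2f}, predicted combined SD {np.sqrt(var_av):.2f} m")
```

Under this benchmark, blurring vision should shift weight toward audition and still yield a multisensory precision benefit; the absence of such a benefit in both experiments is what motivates the authors' single-modality interpretation.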