Journal of Vision — Latest Articles

Variations in sensory eye dominance along the horizontal meridian.
IF 2.0 | CAS Zone 4 | Psychology
Journal of Vision Pub Date: 2025-07-01 | DOI: 10.1167/jov.25.8.1
Chris L E Paffen
Abstract: Sensory eye dominance refers to the dominance of one eye's input over the other during interocular conflict: when discrepant images are presented dichoptically, one eye's image will dominate perception. This study focuses on how sensory eye dominance varies across visual space. Although some characteristics of these variations have been described before, results so far are largely conflicting. Here I argue that this conflict arises because different studies used different methods to assess sensory eye dominance, combined with a wide range of eccentricities. To describe sensory eye dominance systematically and continuously across the visual field, I used a novel method, tracking Continuous Flash Suppression, in which a visual target presented to a single eye moved across the horizontal meridian while in constant competition with a dynamic mask presented to the other eye. Eye dominance across the visual field could be described and quantified by three factors: (1) a generic preference for the nasal visual field, combined with (2) an observer-dependent general bias toward using the left eye, the right eye, or neither. On top of these, some observers showed (3) idiosyncratic biases in local sensory eye dominance. I argue that, while idiosyncratic local biases within an observer probably stem from optical, retinal, or cortical imbalances, the observed nasal advantage is functional: it biases interocular competition toward fixated, partly occluded distant objects of interest.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12227019/pdf/
Citations: 0
Accommodative responses stimulated from the Maddox components of vergence in participants with normal binocular vision.
IF 2.0 | CAS Zone 4 | Psychology
Journal of Vision Pub Date: 2025-07-01 | DOI: 10.1167/jov.25.8.3
Sebastian N Fine, Thomas Rutkowski, Elio M Santos, Suril Gohel, Farzin Hajebrahimi, Mitchell Scheiman, Tara L Alvarez
Abstract: Understanding the interplay of responses to stimulated accommodative blur (B), disparity (D), and proximal (P) cues, and to diminished blur (-b), disparity (-d), and proximal (-p) cueing, in binocularly normal participants is important for comparisons with patient populations. Recordings from 31 participants enrolled in the Convergence Insufficiency Neuro-mechanism Adult Population Study (NCT03593031) were collected. After artifact removal, analyses were performed on 20 BDP, 22 BD(-p), 27 BP(-d), 29 DP(-b), 24 B(-dp), 31 D(-bp), and 29 P(-bd) participant-level response datasets. Group-level statistics were assessed to evaluate the main effect of cue condition on peak velocity (diopters/second) and final amplitude (diopters). Peak velocity assesses the preprogrammed portion of accommodation, whereas final amplitude assesses the feedback portion. Post hoc pairwise comparisons were used to determine cue-to-cue significance. Significant main effects were found for both final amplitude and peak velocity (p < 0.05), indicating differences across cue conditions. Responses evoked by blur and disparity together were comparable to responses with all cues (BDP) for both far-to-near and near-to-far transitions. Responses evoked by blur or disparity alone elicited a reduced accommodative response, as indexed by peak velocity and final amplitude, compared with responses to combined blur and disparity cues. Blur and disparity cues can stimulate accommodative responses through the convergence accommodation/convergence crosslink. Results support significant contributions of blur and disparity cueing to accommodative responses compared with the proximal cue. This research forms the foundation for comparing accommodative responses in individuals with binocular vision dysfunctions.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12227023/pdf/
Citations: 0
Out of sight, out of mind: Spatial cueing reduces the attentional cost of emotional distractors in emotion-induced blindness.
IF 2.3 | CAS Zone 4 | Psychology
Journal of Vision Pub Date: 2025-07-01 | DOI: 10.1167/jov.25.8.18
Divita Singh, Manushi Pandya, Debolina Chakraborty, Kishan Mehta
Abstract: Emotional distractors significantly impair goal-driven tasks, as demonstrated by the phenomenon of emotion-induced blindness (EIB), and developing effective strategies to mitigate their impact has proven difficult. This study applied theoretical insights from spatial cueing studies to address the adverse impact of emotional distractors in EIB. We hypothesized that directing attention away from the distractor location via spatial cues would reduce EIB. To test this, we used a cueing paradigm within a rapid serial visual presentation task, in which the target was always cued correctly but the distractor was cued correctly only 50% of the time. Results revealed a significant three-way interaction between cue validity, emotional valence, and stimulus-onset asynchrony. Specifically, emotional distractors produced less interference when they appeared at an invalidly cued location, supporting the hypothesis that spatially redirecting attention away from emotional distractors can reduce their impact on subsequent target processing. This effect was observed only for emotional distractors and not for neutral ones, highlighting the unique attentional "stickiness" of emotional stimuli. This study is the first to demonstrate that spatial cueing can reduce EIB and, importantly, offers novel evidence for an interaction between spatial and temporal attention in the presence of emotional stimuli.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12306692/pdf/
Citations: 0
Stimulus-dependent delay of perceptual filling-in by microsaccades.
IF 2.0 | CAS Zone 4 | Psychology
Journal of Vision Pub Date: 2025-07-01 | DOI: 10.1167/jov.25.8.8
Max Levinson, Christopher C Pack, Sylvain Baillet
Abstract: Perception is a function of both stimulus features and active sensory sampling. The illusion of perceptual filling-in occurs when eye gaze is kept still: visual boundary perception may fail, causing adjacent visual features to merge into one uniform visual surface. Microsaccades, small involuntary eye movements during gaze fixation, counteract perceptual filling-in, but the mechanisms underlying this process are not well understood. We investigated whether microsaccade efficacy in preventing filling-in depends on two boundary properties: color contrast and retinal eccentricity (distance from gaze center). Twenty-one human participants (male and female) fixated on a point until they experienced filling-in between two isoluminant colored surfaces. We found that increased color contrast independently extends the duration before filling-in but does not alter the impact of individual microsaccades. Conversely, lower eccentricity delayed filling-in only by increasing microsaccade efficacy. We propose that microsaccades facilitate stable boundary perception via a transient retinal motion signal that scales with eccentricity but is invariant to boundary contrast. These results shed light on how incessant eye movements integrate with ongoing stimulus processing to stabilize perceptual detail, with implications for visual rehabilitation and for optimizing visual presentations in virtual and augmented reality environments.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12248979/pdf/
Citations: 0
The effects of simulated central and peripheral vision loss on naturalistic search.
IF 2.0 | CAS Zone 4 | Psychology
Journal of Vision Pub Date: 2025-07-01 | DOI: 10.1167/jov.25.8.6
Kirsten Veerkamp, Daniel Müller, Gwyneth A Pechler, David L Mann, Christian N L Olivers
Abstract: Worldwide, millions of people experience central or peripheral vision loss. The consequences for daily visual functioning are not completely known, in particular because previous studies lacked real-life representativeness. Our aim was to examine the effects of simulated central or peripheral impairment on a range of measures underlying performance in a naturalistic visual search task in a three-dimensional (3D) environment. The task was performed in a 3D virtual reality (VR) supermarket environment while participants were seated in a swivel chair. We used gaze-contingent masks to simulate vision loss. Participants were allocated to one of three conditions in a between-subjects design: full vision, central vision loss (a 6° mask), or peripheral vision loss (a 6° aperture). Each participant performed four search sequences, each consisting of four target products from a memorized shopping list, under varying contrast levels. Besides search time and accuracy, we tracked navigational, oculomotor, head, and torso movements to assess which cognitive and motor components contributed to performance differences. Results showed increased task completion times with simulated central and peripheral vision loss, but more so with peripheral loss. With central vision loss, navigation was less efficient, it took longer to verify targets, and participants made more and shorter fixations. With peripheral vision loss, navigation was even less efficient, it took longer to find and verify a target, and saccadic amplitudes were reduced. Low contrast particularly affected search with peripheral vision loss. Memory failure, indicating cognitive load, did not differ between conditions. Thus, we demonstrate that simulations of central and peripheral vision loss lead to differential search profiles in a naturalistic 3D environment.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12236627/pdf/
Citations: 0
Function over form: The temporal evolution of affordance-based scene categorization.
IF 2.0 | CAS Zone 4 | Psychology
Journal of Vision Pub Date: 2025-07-01 | DOI: 10.1167/jov.25.8.10
Michelle R Greene, Bruce C Hansen
Abstract: Humans can rapidly understand and categorize scenes, yet the specific features and mechanisms that enable categorization remain debated. Here, we investigated whether affordances (the possible actions a scene supports) facilitate scene categorization even when other, similarly informative features are present. In Experiment 1, we generated triplets of images that were equally dissimilar on one feature dimension (affordances, materials, surfaces) but similar on the remaining two. In an odd-one-out task, observers consistently chose the image that differed in its affordances as the outlier, despite equally large differences along the other dimensions. In Experiment 2, we asked whether shared affordances also interfere with rapid categorization. When distractors shared affordances rather than surface features with a target category, observers committed significantly more false alarms, indicating that functional similarity creates stronger competition during scene categorization. Finally, in Experiment 3, we recorded ERPs to examine the time course of category representations, using multivariate decoding to assess the quality of scene-category representations. Both affordance-similar and surface-similar distractors yielded above-chance decoding starting around 60 to 70 ms after stimulus onset; however, the neural discriminability of target categories was reduced in the affordance-similar condition starting around 150 ms. These findings suggest that affordances carry a privileged status in scene perception, shaping both behavioral categorization performance and neural processing.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12248959/pdf/
Citations: 0
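The time-resolved multivariate decoding used in Experiment 3 can be illustrated with a minimal sketch. This is not the authors' pipeline: the data are synthetic (trials × channels × timepoints), the classifier is a simple cross-validated nearest-centroid decoder, and the category effect injected at later timepoints merely stands in for the post-150-ms differences reported above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "ERP" data: trials x channels x timepoints, two scene categories.
n_trials, n_chan, n_time = 80, 16, 50
X = rng.normal(size=(n_trials, n_chan, n_time))
y = np.repeat([0, 1], n_trials // 2)
# Inject a category difference from timepoint 20 onward (a stand-in for the
# post-stimulus effects described in the abstract).
X[y == 1, :, 20:] += 0.8

def time_resolved_decoding(X, y, n_folds=4):
    """Cross-validated nearest-centroid decoding accuracy at each timepoint."""
    acc = np.zeros(X.shape[2])
    folds = np.arange(len(y)) % n_folds
    for t in range(X.shape[2]):
        correct = 0
        for f in range(n_folds):
            train, test = folds != f, folds == f
            c0 = X[train & (y == 0), :, t].mean(axis=0)  # class centroids
            c1 = X[train & (y == 1), :, t].mean(axis=0)
            d0 = np.linalg.norm(X[test, :, t] - c0, axis=1)
            d1 = np.linalg.norm(X[test, :, t] - c1, axis=1)
            correct += np.sum((d1 < d0) == (y[test] == 1))
        acc[t] = correct / len(y)
    return acc

acc = time_resolved_decoding(X, y)
# Accuracy hovers near chance before the injected effect, above chance after.
print(acc[:20].mean(), acc[20:].mean())
```

The same loop structure applies to real EEG epochs; only the classifier and the cross-validation scheme would typically be richer.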
Center-surround motion interaction between low and high spatial frequencies under binocular and dichoptic viewing.
IF 2.0 | CAS Zone 4 | Psychology
Journal of Vision Pub Date: 2025-07-01 | DOI: 10.1167/jov.25.8.15
Omar Bachtoula, Ignacio Serrano-Pedraza
Abstract: Motion discrimination of a stimulus that contains fine features is impaired when static coarser features are added to it. Previous findings have shown that this cross-scale motion interaction occurs under dichoptic presentation, where both components spatially overlap. Here, we used a center-surround spatial configuration in which the two components do not spatially overlap. We measured the strength of this motion interaction by assessing cancellation speeds (i.e., the speed needed to cancel out the motion discrimination impairment) for different combinations of spatial frequencies, temporal frequencies, contrasts, and durations, under binocular and dichoptic presentation. The experiments revealed that cancellation speed is bandpass tuned to spatial frequency, increases with temporal frequency up to 12 Hz before slightly decreasing, and intensifies with contrast before stabilizing at higher levels. We found similar patterns of results for both dichoptic and binocular presentations, although the interaction was stronger in the binocular condition. These results confirm that this interaction mechanism can integrate fine and coarse scales when they are presented to different eyes, even when the motion signals do not spatially overlap. Finally, we explain the differences between dichoptic and binocular cancellation speeds using a motion-sensing model that includes a cross-scale interaction stage. The model simulations suggest that an interocular gain control, followed by binocular summation and then by cross-scale interaction, accounts for the differences observed between binocular and dichoptic viewing.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12282640/pdf/
Citations: 0
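The staging suggested by the model simulations (interocular gain control, then binocular summation, then cross-scale interaction) can be illustrated with a deliberately simplified numerical sketch. The divisive functional form, the constant sigma, and the stimulus values below are illustrative assumptions, not the authors' fitted model; the point is only that gain control applied before summation lets more coarse-scale (suppressive) energy through under binocular than under dichoptic presentation.

```python
def eye_gain(signal, other_eye_drive, sigma=1.0):
    """Interocular gain control: divisive suppression of one eye's signal by
    the other eye's total drive (functional form and sigma are illustrative)."""
    return signal / (sigma + other_eye_drive)

# Contrasts in arbitrary units: a moving fine component and a static coarse one.
fine, coarse = 1.0, 1.0

# Binocular: both components in both eyes, so each eye's total drive is 2.0.
# The coarse energy reaching the cross-scale stage is summed over the two eyes.
coarse_binocular = 2 * eye_gain(coarse, fine + coarse)

# Dichoptic: fine in one eye, coarse in the other; each eye's drive is 1.0,
# and only one eye contributes coarse energy to the binocular sum.
coarse_dichoptic = eye_gain(coarse, fine)

# More surviving coarse energy implies a stronger cross-scale interaction,
# qualitatively consistent with the stronger binocular effect reported above.
print(coarse_binocular, coarse_dichoptic)  # 2/3 vs. 1/2
```

Any monotone cross-scale suppression applied downstream would then inherit this ordering, so the sketch captures only the qualitative direction of the effect.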
Revealing temporal dynamics of the visuomotor system via continuous tracking of position and attribute.
IF 2.3 | CAS Zone 4 | Psychology
Journal of Vision Pub Date: 2025-07-01 | DOI: 10.1167/jov.25.8.19
Yen-Ju Chen, Zitang Sun, Shin'ya Nishida
Abstract: Continuous tracking is a recently developed psychophysical technique for efficiently estimating human visual temporal characteristics. The standard version of the task, position tracking (PT), asks participants to track the location of a continuously moving target with a motor response (e.g., mouse movement). Some studies have used a variant, attribute tracking (AT), which requires participants to track and reproduce a continuously changing attribute of the target (e.g., luminance) instead of its position. For both PT and AT, the temporal dynamics of the entire system from vision to action can be estimated from the cross-correlogram (CCG) between the stimulus and response trajectories. The similarities and differences between PT and AT CCGs, however, remain elusive and were examined in this study. Experiment 1 compared the two CCGs using luminance-defined circular patches, color-contrast-defined patches, and luminance-defined patches of various spatial frequencies. The results indicate that the PT response was faster and less affected by the stimulus variables than the AT response. Experiment 2 showed that these differences could be reduced by making the visuomotor mapping of PT less direct (reversing the motor response direction) and by making the local stimulus change magnitude comparable between PT and AT. A comparison with traditional reaction time measures (Experiment 3) further showed that the peak latency of the PT CCG aligned better with simple reaction time, whereas that of the AT CCG aligned better with choice reaction time. These results indicate that the CCG is more sluggish for AT than for PT because AT includes the processes of identifying the stimulus content (the direction of attribute change) and mapping it onto a motor response arbitrarily specified by the experimenter, and because the effective stimulus change magnitude for AT is often weaker than for PT. These findings provide a clearer understanding of the meaning of CCGs measured by the two types of continuous tracking task.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12309616/pdf/
Citations: 0
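The cross-correlogram (CCG) analysis at the heart of continuous tracking is straightforward to reproduce on synthetic data. The sketch below is illustrative, not the authors' code: stimulus velocity is white noise, the "response" follows it with a fixed 12-sample lag plus motor noise, and the CCG peak recovers that lag, mirroring how peak latency is read off as a visuomotor delay.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic tracking run: white-noise stimulus velocity, and a response that
# follows it with a fixed lag (12 samples here) plus motor noise.
n, true_lag = 2000, 12
stim_vel = rng.normal(size=n)
resp_vel = np.roll(stim_vel, true_lag) + rng.normal(scale=0.5, size=n)

def cross_correlogram(stim, resp, max_lag=50):
    """Correlation between the stimulus and the lagged response, per lag."""
    stim = (stim - stim.mean()) / stim.std()
    resp = (resp - resp.mean()) / resp.std()
    n = len(stim)
    lags = np.arange(max_lag + 1)
    ccg = np.array([np.corrcoef(stim[: n - k or None], resp[k:])[0, 1]
                    for k in lags])
    return lags, ccg

lags, ccg = cross_correlogram(stim_vel, resp_vel)
peak_latency = lags[np.argmax(ccg)]
print(peak_latency)  # recovers the imposed 12-sample lag
```

With real data, the stimulus and response traces come from the experiment rather than a generative model, but the lag-by-lag correlation is computed the same way.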
The visibility of Eidolon distortions in things and stuff.
IF 2.0 | CAS Zone 4 | Psychology
Journal of Vision Pub Date: 2025-07-01 | DOI: 10.1167/jov.25.8.12
Swantje Mahncke, Lina Eicke-Kanani, Ole Fabritz, Thomas S A Wallis
Abstract: The visibility of alterations to the physical structure of images (distortions) depends on the image content and on viewing conditions. Here, we measure human sensitivity to a class of image distortions, Eidolons, applied to image sets containing a range of content, from object images and scenes to textures and materials. In an odd-one-out task with peripherally presented images, we replicate previous findings that distortions are harder to detect in images containing large regions of texture or material and fewer segmentable object boundaries. Next, we reason that an image-computable model that captures the critical aspects of the encoding transformations should predict the discriminability of distortion-image pairs irrespective of image content. We therefore test a variety of image-computable models, treating them as perceptual metrics, using a simple hierarchical regression framework. Of the tested models, the texture statistics of the Portilla and Simoncelli model best predicted performance, beating simple Fourier-spectrum-based transforms and a biologically inspired LGN statistics model. There remains, however, a substantial gap between the best single image-computable metric and an oracle model that has information about the experimental parameters and image labels. This work complements existing datasets on image distortion discriminability and image quality, and extends existing frameworks for comparatively evaluating the predictive performance of perceptual metrics.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12255176/pdf/
Citations: 0
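The idea of treating an image-computable model as a perceptual metric can be sketched in a few lines. The metric below (L2 distance between log amplitude spectra) is only a crude stand-in for the Fourier-spectrum-based transforms mentioned above, applied here to synthetic noise "images"; the images, distortion levels, and metric choice are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def spectrum_metric(img_a, img_b):
    """Toy perceptual metric: L2 distance between log amplitude spectra
    (a crude stand-in for Fourier-spectrum-based perceptual metrics)."""
    fa = np.log1p(np.abs(np.fft.fft2(img_a)))
    fb = np.log1p(np.abs(np.fft.fft2(img_b)))
    return float(np.linalg.norm(fa - fb))

# Synthetic "image" plus a mild and a strong additive distortion.
img = rng.normal(size=(64, 64))
mild = img + rng.normal(scale=0.1, size=img.shape)
strong = img + rng.normal(scale=1.0, size=img.shape)

# A usable metric should at least rank the stronger distortion as more visible.
d_mild = spectrum_metric(img, mild)
d_strong = spectrum_metric(img, strong)
print(d_mild < d_strong)
```

Evaluating such a metric against human data then amounts to asking how well its distances predict trial-by-trial discriminability, which is where the regression framework described in the abstract comes in.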
Computational evidence for an inverse relationship between retinal and brain complexity.
IF 2.0 | CAS Zone 4 | Psychology
Journal of Vision Pub Date: 2025-07-01 | DOI: 10.1167/jov.25.8.9
Mitchell B Slapik
Abstract: Visual neuroscientists have long observed an inverse relationship between brain and retinal complexity: as brain complexity increases across species, retinas adapt toward simpler visual processing. Lindsey et al. previously provided a computational explanation for this pattern, showing that shallow networks encode complex features in their first stage of processing, whereas deep networks encode simpler features. Here, these findings are extended through a suite of representational analyses showing that shallow networks generate high-dimensional representations with linear decision boundaries and specific visual features that can feed directly into behavioral responses. In contrast, deep networks generate low-dimensional representations with nonlinear decision boundaries and general visual features; these representations require further processing before they can produce the appropriate behavioral response. In summary, the findings extend a longstanding principle linking simpler retinal features to complex brains and offer a computational framework for understanding neural network behavior more generally.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12240199/pdf/
Citations: 0