Where do I go? Decoding temporal neural dynamics of scene processing and visuospatial memory interactions using convolutional neural networks.

Impact Factor 2.3 · CAS Tier 4 (Psychology) · JCR Q2 (Ophthalmology)
Clément Naveilhan, Raphaël Zory, Stephen Ramanoël
Journal of Vision, 25(10):15. Published 2025-08-01.
DOI: 10.1167/jov.25.10.15
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12400970/pdf/
Citations: 0

Abstract

Visual scene perception enables rapid interpretation of the surrounding environment by integrating multiple visual features related to task demands and context, which is essential for goal-directed behavior. In the present work, we investigated the temporal neural dynamics underlying the interaction between the processing of bottom-up visual features and top-down contextual knowledge during scene perception. We asked whether newly acquired spatial knowledge would immediately modulate the early neural responses involved in the extraction of available navigational affordances (i.e., the number of open doors). For this purpose, we analyzed electroencephalographic data from 30 participants performing interleaved blocks of a scene memory task and a visuospatial memory task in which we manipulated the number of navigational affordances available. We used convolutional neural networks coupled with gradient-weighted class activation mapping to identify the electroencephalographic channels and time points that contributed most to classification performance. The results indicated an early temporal window of integration in occipitoparietal activity (50-250 ms post-stimulus) for several aspects of visual perception, including scene color and number of affordances, as well as for spatial memory content. Moreover, a convolutional neural network trained to detect affordances in the scene memory task failed to generalize to detect the same affordances after participants learned spatial information about goal position within the scene. Taken together, these results reveal an early common window of integration for scene and visuospatial memory information, with a specific and immediate top-down influence of newly acquired spatial knowledge on early neural correlates of scene perception.
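The abstract names gradient-weighted class activation mapping (Grad-CAM) as the tool used to localize which EEG channels and time points drive the CNN's classification. The paper's actual network architecture is not given here, so the following is only a minimal NumPy sketch of the core Grad-CAM computation: average the class-score gradients over the temporal axis to get one weight per convolutional filter, take the weighted sum of the feature maps, and keep the positive part. The function name `grad_cam_1d` and the toy activation/gradient shapes are illustrative assumptions, not the authors' code; in practice the activations and gradients would come from a deep-learning framework's hooks on the chosen conv layer.

```python
import numpy as np

def grad_cam_1d(activations, gradients):
    """Grad-CAM relevance map for a 1D convolutional layer.

    activations: (n_filters, n_times) feature maps of the target conv layer
    gradients:   (n_filters, n_times) d(class score) / d(activations)
    Returns a (n_times,) nonnegative relevance map over time points.
    """
    # One weight per filter: global-average-pool the gradients over time.
    weights = gradients.mean(axis=1, keepdims=True)           # (n_filters, 1)
    # Weighted sum of feature maps, ReLU to keep positive evidence only.
    cam = np.maximum((weights * activations).sum(axis=0), 0)  # (n_times,)
    # Normalize to [0, 1] so maps are comparable across trials/classes.
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

# Toy example: 4 filters over 100 time samples of an EEG epoch.
rng = np.random.default_rng(0)
acts = rng.standard_normal((4, 100))
grads = rng.standard_normal((4, 100))
cam = grad_cam_1d(acts, grads)
```

Peaks in `cam` mark the time points (and, with channel-resolved layers, the electrodes) most responsible for the predicted class, which is how a 50-250 ms occipitoparietal window could be read off such maps.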


Source journal

Journal of Vision (Medicine - Ophthalmology)
CiteScore: 2.90
Self-citation rate: 5.60%
Articles per year: 218
Review time: 3-6 weeks
Journal description: Exploring all aspects of biological visual function, including spatial vision, perception, low vision, color vision and more, spanning the fields of neuroscience, psychology and psychophysics.