Where do I go? Decoding temporal neural dynamics of scene processing and visuospatial memory interactions using convolutional neural networks
Authors: Clément Naveilhan, Raphaël Zory, Stephen Ramanoël
DOI: 10.1167/jov.25.10.15
Journal of Vision, 25(10), 15. Published 2025-08-01.
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12400970/pdf/
Citations: 0
Abstract
Visual scene perception enables rapid interpretation of the surrounding environment by integrating multiple visual features related to task demands and context, which is essential for goal-directed behavior. In the present work, we investigated the temporal neural dynamics underlying the interaction between the processing of bottom-up visual features and top-down contextual knowledge during scene perception. We asked whether newly acquired spatial knowledge would immediately modulate the early neural responses involved in the extraction of available navigational affordances (i.e., the number of open doors). For this purpose, we analyzed electroencephalographic data from 30 participants performing interleaved blocks of a scene memory task and a visuospatial memory task in which we manipulated the number of navigational affordances available. We used convolutional neural networks coupled with gradient-weighted class activation mapping (Grad-CAM) to identify the electroencephalographic channels and time points that contributed most to classification performance. The results indicated an early temporal window of integration in occipitoparietal activity (50-250 ms post-stimulus) for several aspects of visual perception, including scene color and the number of affordances, as well as for spatial memory content. Moreover, a convolutional neural network trained to detect affordances in the scene memory task failed to generalize to the same affordances after participants learned spatial information about goal position within the scene. Taken together, these results reveal an early common window of integration for scene and visuospatial memory information, with a specific and immediate top-down influence of newly acquired spatial knowledge on early neural correlates of scene perception.
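The gradient-weighted class activation mapping described above can be sketched in a few lines. This is a minimal, framework-agnostic illustration of the Grad-CAM computation (Selvaraju et al.) using NumPy, not the authors' actual pipeline: the array shapes, the toy data, and the channel-by-time layout are assumptions chosen to mirror an EEG decoding setting.

```python
import numpy as np

def grad_cam(activations, gradients):
    """Gradient-weighted class activation map over an EEG-like input.

    activations: (K, C, T) feature maps of a conv layer
                 (K maps over C channels x T time points).
    gradients:   (K, C, T) gradients of the class score w.r.t. activations.
    Returns a (C, T) saliency map highlighting which channels and time
    points support the classification decision.
    """
    # Global-average-pool the gradients: one importance weight per feature map.
    weights = gradients.mean(axis=(1, 2))             # shape (K,)
    # Weighted sum of the feature maps, collapsing the K axis.
    cam = np.tensordot(weights, activations, axes=1)  # shape (C, T)
    # ReLU: keep only evidence that positively supports the class.
    return np.maximum(cam, 0.0)

# Toy example: 4 feature maps over 8 EEG channels x 10 time points.
rng = np.random.default_rng(0)
acts = rng.random((4, 8, 10))
grads = rng.random((4, 8, 10))
cam = grad_cam(acts, grads)  # (8, 10) channel-by-time saliency map
```

In practice the activations and gradients would be extracted from a trained CNN with framework hooks, and the resulting map averaged over trials to localize informative channels and latencies, as done in the paper.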
Journal overview:
Exploring all aspects of biological visual function, including spatial vision, perception, low vision, color vision, and more, spanning the fields of neuroscience, psychology, and psychophysics.