Visual statistical learning based on a coupled shape-position recurrent neural network model.

IF 3.1 · CAS Tier 3 (Engineering & Technology) · JCR Q2 (Neuroscience)
Cognitive Neurodynamics · Pub Date: 2025-12-01 · Epub Date: 2025-06-17 · DOI: 10.1007/s11571-025-10285-3
Baolong Sun, Yihong Wang, Xuying Xu, Xiaochuan Pan
Journal: Cognitive Neurodynamics, vol. 19, no. 1, p. 96 (Journal Article, Epub 2025-06-17). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12174023/pdf/
Citations: 0

Abstract

The visual system can learn the statistical regularities (temporal and/or spatial) that characterize a visual scene automatically and implicitly. This ability is referred to as visual statistical learning (VSL). VSL can group several objects with fixed statistical properties into a chunk. This complex process relies on the collaborative involvement of multiple brain regions that work together to learn the chunk. Although behavioral experiments have explored the cognitive functions of VSL, its computational mechanisms remain poorly understood. To address this issue, this study proposes a coupled shape-position recurrent neural network model, based on the anatomical structure of the visual system, to explain how chunk information is learned and represented in neural networks. The model comprises three core modules: the position network, which encodes object position information; the shape network, which encodes object shape information; and the decision network, which integrates the neuronal activity of the position and shape networks to make decisions. The model successfully simulates the results of a classic spatial VSL experiment. The distribution of neural firing rates in the decision network differs significantly between chunk and non-chunk conditions: neurons in the chunk condition exhibit stronger firing rates than those in the non-chunk condition. Furthermore, after the model learns a scene containing both chunk and non-chunk stimuli, neurons in the position network selectively encode far and near stimuli, respectively, whereas neurons in the shape network distinguish between chunks and non-chunks. The chunk-encoding neurons respond selectively to specific chunks. These results indicate that the proposed model can learn the spatial regularities of the stimuli to discriminate chunks from non-chunks, and that neurons in the shape network respond selectively to chunk and non-chunk information.
These findings offer important theoretical insights into the representation mechanisms of chunk information in neural networks and propose a new framework for modeling spatial VSL.
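The abstract describes the model only at the architectural level: two recurrently coupled populations (position and shape) whose activity is integrated by a decision readout. The following Python/NumPy sketch is an illustrative rate-model rendering of that coupled architecture; all weights, dimensions, time constants, and the random initialization are assumptions made here for illustration, not the paper's actual equations, learning rule, or parameters, which are given only in the full text.

```python
import numpy as np

rng = np.random.default_rng(0)
N, D, TAU = 50, 10, 0.1   # neurons per module, input dimensionality, Euler step size

def step(h, drive):
    # Rate-model Euler update: tau * dh/dt = -h + tanh(drive)
    return h + TAU * (-h + np.tanh(drive))

# Hypothetical weights (random here; in the paper they are shaped by learning)
Wp  = 0.5 / np.sqrt(N) * rng.standard_normal((N, N))        # position recurrence
Ws  = 0.5 / np.sqrt(N) * rng.standard_normal((N, N))        # shape recurrence
Up  = rng.standard_normal((N, D))                           # position input weights
Us  = rng.standard_normal((N, D))                           # shape input weights
Cps = 0.2 / np.sqrt(N) * rng.standard_normal((N, N))        # shape -> position coupling
Csp = 0.2 / np.sqrt(N) * rng.standard_normal((N, N))        # position -> shape coupling
Wd  = 1.0 / np.sqrt(2 * N) * rng.standard_normal((2, 2 * N))  # decision readout

def run(x_pos, x_shp, T=200):
    """Integrate the coupled networks for T steps and return the decision drive."""
    hp = np.zeros(N)
    hs = np.zeros(N)
    for _ in range(T):
        # Simultaneous update: both modules read each other's pre-step state
        hp, hs = (step(hp, Wp @ hp + Up @ x_pos + Cps @ hs),
                  step(hs, Ws @ hs + Us @ x_shp + Csp @ hp))
    rates = np.concatenate([np.maximum(hp, 0), np.maximum(hs, 0)])  # rectified firing rates
    return Wd @ rates  # 2-d decision drive; argmax -> chunk vs. non-chunk response

out = run(rng.standard_normal(D), rng.standard_normal(D))
print(out.shape)  # (2,)
```

In this sketch the chunk/non-chunk decision would correspond to comparing the two readout units; the paper's actual decision network is itself a neural population rather than a linear readout.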

Source journal: Cognitive Neurodynamics (Medicine – Neuroscience)
CiteScore: 6.90 · Self-citation rate: 18.90% · Articles per year: 140 · Review time: 12 months
Journal introduction: Cognitive Neurodynamics provides a unique forum of communication and cooperation for scientists and engineers working in the fields of cognitive neurodynamics, intelligent science and applications, bridging the gap between theory and application, without any preference for purely theoretical, experimental or computational models. The emphasis is on publishing original models of cognitive neurodynamics, novel computational theories and experimental results. In particular, intelligent science inspired by cognitive neuroscience and neurodynamics is also very welcome. The scope of Cognitive Neurodynamics covers cognitive neuroscience, neural computation based on dynamics, computer science, intelligent science, as well as their interdisciplinary applications in the natural and engineering sciences. Papers that are appropriate for non-specialist readers are encouraged.
1. There is no page limit for manuscripts submitted to Cognitive Neurodynamics. Research papers should clearly represent an important advance of especially broad interest to researchers and technologists in neuroscience, biophysics, BCI, neural computing and intelligent robotics.
2. Cognitive Neurodynamics also welcomes brief communications: short papers reporting results that are of genuinely broad interest but that for one reason or another do not make a sufficiently complete story to justify a full article. Brief communications should consist of approximately four manuscript pages.
3. Cognitive Neurodynamics publishes review articles in which a specific field is reviewed through an exhaustive literature survey. There are no restrictions on the number of pages. Review articles are usually invited, but submitted reviews will also be considered.