Learning Representations of Animated Motion Sequences - A Neural Model

Top. Cogn. Sci. 6(1), 2014. DOI: 10.1111/tops.12075
Georg Layher, M. Giese, H. Neumann
{"title":"Learning Representations of Animated Motion Sequences - A Neural Model","authors":"Georg Layher, M. Giese, H. Neumann","doi":"10.1111/tops.12075","DOIUrl":null,"url":null,"abstract":"The detection and categorization of animate motions is a crucial task underlying social interaction and perceptual decision making. Neural representations of perceived animate objects are partially located in the primate cortical region STS, which is a region that receives convergent input from intermediate-level form and motion representations. Populations of STS cells exist which are selectively responsive to specific animated motion sequences, such as walkers. It is still unclear how and to what extent form and motion information contribute to the generation of such representations and what kind of mechanisms are involved in the learning processes. The article develops a cortical model architecture for the unsupervised learning of animated motion sequence representations. We demonstrate how the model automatically selects significant motion patterns as well as meaningful static form prototypes characterized by a high degree of articulation. Such key poses are selectively reinforced during learning through a cross talk between the motion and form processing streams. Furthermore, we show how sequence-selective representations are learned in STS by fusing static form and motion input from the segregated bottom-up driving input streams. Cells in STS, in turn, feed their activities recurrently to their input sites along top-down signal pathways. We show how such learned feedback connections enable predictions about future input as anticipation generated by sequence-selective STS cells. Network simulations demonstrate the computational capacity of the proposed model by reproducing several experimental findings from neurosciences and by accounting for recent behavioral data.","PeriodicalId":152645,"journal":{"name":"Top. Cogn. Sci.","volume":"6 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"22","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Top. Cogn. Sci.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1111/tops.12075","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 22

Abstract

The detection and categorization of animate motions is a crucial task underlying social interaction and perceptual decision making. Neural representations of perceived animate objects are partially located in the primate cortical region STS, a region that receives convergent input from intermediate-level form and motion representations. Populations of STS cells exist that are selectively responsive to specific animated motion sequences, such as walkers. It is still unclear how and to what extent form and motion information contribute to the generation of such representations, and what kinds of mechanisms are involved in the learning processes. This article develops a cortical model architecture for the unsupervised learning of animated motion sequence representations. We demonstrate how the model automatically selects significant motion patterns as well as meaningful static form prototypes characterized by a high degree of articulation. Such key poses are selectively reinforced during learning through cross talk between the motion and form processing streams. Furthermore, we show how sequence-selective representations are learned in STS by fusing static form and motion input from the segregated bottom-up driving input streams. Cells in STS, in turn, feed their activities recurrently back to their input sites along top-down signal pathways. We show how such learned feedback connections enable predictions about future input, in the form of anticipation signals generated by sequence-selective STS cells. Network simulations demonstrate the computational capacity of the proposed model by reproducing several experimental findings from neuroscience and by accounting for recent behavioral data.
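
To make the two-stream idea in the abstract concrete, here is a minimal toy sketch in Python/NumPy: model "STS" units fuse bottom-up form and motion features through competitive Hebbian learning, and learned top-down weights let the winning unit anticipate the next input frame. All layer sizes, the synthetic "walker" sequence, and the specific learning rules are illustrative assumptions, not the published model's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

N_FORM, N_MOTION, N_STS = 16, 16, 8   # hypothetical layer sizes
T = 20                                # frames in the toy sequence

# Bottom-up weights: each "STS" unit receives convergent form + motion input.
W_bu = rng.normal(scale=0.1, size=(N_STS, N_FORM + N_MOTION))
# Top-down weights: from STS back toward its input sites (feedback prediction).
W_td = np.zeros((N_FORM + N_MOTION, N_STS))

# Toy "walker": form and motion features drift smoothly over the sequence.
phase = np.linspace(0.0, np.pi, T)[:, None]
form_seq = np.abs(np.sin(phase + np.arange(N_FORM)))
motion_seq = np.abs(np.cos(phase + np.arange(N_MOTION)))

eta = 0.05
for epoch in range(50):
    for t in range(T):
        x = np.concatenate([form_seq[t], motion_seq[t]])        # fused input
        x_next = np.concatenate([form_seq[(t + 1) % T],
                                 motion_seq[(t + 1) % T]])      # next frame
        k = int(np.argmax(W_bu @ x))     # winner-take-all competition
        # Competitive Hebbian update: the winner moves toward the input,
        # then is renormalized so selection acts like cosine similarity.
        W_bu[k] += eta * (x - W_bu[k])
        W_bu[k] /= np.linalg.norm(W_bu[k]) + 1e-9
        # Top-down weights learn to anticipate the *next* input frame.
        W_td[:, k] += eta * (x_next - W_td[:, k])

# Anticipation: the active unit's top-down weights approximate the frame
# that typically follows the inputs it codes for.
x0 = np.concatenate([form_seq[0], motion_seq[0]])
winner = int(np.argmax(W_bu @ x0))
predicted = W_td[:, winner]
actual = np.concatenate([form_seq[1], motion_seq[1]])
print("prediction error:", np.linalg.norm(predicted - actual))
```

The winner-take-all step is a crude stand-in for the competitive interactions among sequence-selective cells; the published model uses a considerably richer cortical circuit with separate form and motion hierarchies and recurrent dynamics.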