Neurally plausible mechanisms for learning selective and invariant representations.

Impact factor 2.3 · JCR Q1 (Neuroscience) · CAS Tier 4 (Medicine)
Fabio Anselmi, Ankit Patel, Lorenzo Rosasco
{"title":"Neurally plausible mechanisms for learning selective and invariant representations.","authors":"Fabio Anselmi,&nbsp;Ankit Patel,&nbsp;Lorenzo Rosasco","doi":"10.1186/s13408-020-00088-7","DOIUrl":null,"url":null,"abstract":"<p><p>Coding for visual stimuli in the ventral stream is known to be invariant to object identity preserving nuisance transformations. Indeed, much recent theoretical and experimental work suggests that the main challenge for the visual cortex is to build up such nuisance invariant representations. Recently, artificial convolutional networks have succeeded in both learning such invariant properties and, surprisingly, predicting cortical responses in macaque and mouse visual cortex with unprecedented accuracy. However, some of the key ingredients that enable such success-supervised learning and the backpropagation algorithm-are neurally implausible. This makes it difficult to relate advances in understanding convolutional networks to the brain. In contrast, many of the existing neurally plausible theories of invariant representations in the brain involve unsupervised learning, and have been strongly tied to specific plasticity rules. To close this gap, we study an instantiation of simple-complex cell model and show, for a broad class of unsupervised learning rules (including Hebbian learning), that we can learn object representations that are invariant to nuisance transformations belonging to a finite orthogonal group. These findings may have implications for developing neurally plausible theories and models of how the visual cortex or artificial neural networks build selectivity for discriminating objects and invariance to real-world nuisance transformations.</p>","PeriodicalId":54271,"journal":{"name":"Journal of Mathematical Neuroscience","volume":" ","pages":"12"},"PeriodicalIF":2.3000,"publicationDate":"2020-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s13408-020-00088-7","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Mathematical Neuroscience","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1186/s13408-020-00088-7","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"Neuroscience","Score":null,"Total":0}
Citations: 4

Abstract

Coding for visual stimuli in the ventral stream is known to be invariant to nuisance transformations that preserve object identity. Indeed, much recent theoretical and experimental work suggests that the main challenge for the visual cortex is to build up such nuisance-invariant representations. Recently, artificial convolutional networks have succeeded both in learning such invariant properties and, surprisingly, in predicting cortical responses in macaque and mouse visual cortex with unprecedented accuracy. However, some of the key ingredients that enable this success, namely supervised learning and the backpropagation algorithm, are neurally implausible. This makes it difficult to relate advances in understanding convolutional networks to the brain. In contrast, many of the existing neurally plausible theories of invariant representations in the brain involve unsupervised learning and have been strongly tied to specific plasticity rules. To close this gap, we study an instantiation of the simple-complex cell model and show, for a broad class of unsupervised learning rules (including Hebbian learning), that we can learn object representations that are invariant to nuisance transformations belonging to a finite orthogonal group. These findings may have implications for developing neurally plausible theories and models of how the visual cortex or artificial neural networks build selectivity for discriminating objects and invariance to real-world nuisance transformations.
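To make the construction concrete, the following is a minimal sketch (not the authors' code; the choice of group, template, and nonlinearity are all illustrative) of the simple-complex cell computation the abstract refers to: simple cells compute inner products between the input and group-transformed copies of a learned template, and a complex cell pools these responses over a finite orthogonal group. Because the group is closed under composition and inverses, the pooled signature is unchanged when any group element is applied to the input.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8

# Finite orthogonal group: cyclic coordinate shifts. Permutation matrices
# are orthogonal, and cyclic shifts are closed under composition.
G = [np.roll(np.eye(d), k, axis=0) for k in range(d)]

# Template vector; in the paper's setting it would be learned by an
# unsupervised (e.g. Hebbian-like) rule. Here it is random for illustration.
template = rng.standard_normal(d)

def signature(x, nonlinearity=np.tanh):
    """Complex-cell output: pool simple-cell responses <x, g @ template>
    over every g in G. Group closure makes this pooled value invariant
    to any nuisance g' in G applied to the input x."""
    return np.mean([nonlinearity(x @ (g @ template)) for g in G])

x = rng.standard_normal(d)
g_nuisance = G[3]  # an arbitrary nuisance transformation from the group
print(np.isclose(signature(x), signature(g_nuisance @ x)))  # True: invariant
```

Any finite orthogonal group would work in place of the cyclic shifts chosen here; the invariance relies only on the group structure, which is exactly the condition the abstract invokes.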


Source journal
Journal of Mathematical Neuroscience (subject area: Neuroscience, miscellaneous)
Self-citation rate: 0.00%
Articles published per year: 0
Review time: 13 weeks
Journal description: The Journal of Mathematical Neuroscience (JMN) publishes research articles on the mathematical modeling and analysis of all areas of neuroscience, i.e., the study of the nervous system and its dysfunctions. The focus is on using mathematics as the primary tool for elucidating the fundamental mechanisms responsible for experimentally observed behaviours in neuroscience at all relevant scales, from the molecular world to that of cognition. The aim is to publish work that uses advanced mathematical techniques to illuminate these questions. It publishes full-length original papers, rapid communications, and review articles. Papers that combine theoretical results supported by convincing numerical experiments are especially encouraged. Papers that introduce and help develop new pieces of mathematical theory likely to be relevant to future studies of the nervous system in general, and the human brain in particular, are also welcome.