{"title":"学习选择性和不变表征的神经机制。","authors":"Fabio Anselmi, Ankit Patel, Lorenzo Rosasco","doi":"10.1186/s13408-020-00088-7","DOIUrl":null,"url":null,"abstract":"<p><p>Coding for visual stimuli in the ventral stream is known to be invariant to object identity preserving nuisance transformations. Indeed, much recent theoretical and experimental work suggests that the main challenge for the visual cortex is to build up such nuisance invariant representations. Recently, artificial convolutional networks have succeeded in both learning such invariant properties and, surprisingly, predicting cortical responses in macaque and mouse visual cortex with unprecedented accuracy. However, some of the key ingredients that enable such success-supervised learning and the backpropagation algorithm-are neurally implausible. This makes it difficult to relate advances in understanding convolutional networks to the brain. In contrast, many of the existing neurally plausible theories of invariant representations in the brain involve unsupervised learning, and have been strongly tied to specific plasticity rules. To close this gap, we study an instantiation of simple-complex cell model and show, for a broad class of unsupervised learning rules (including Hebbian learning), that we can learn object representations that are invariant to nuisance transformations belonging to a finite orthogonal group. These findings may have implications for developing neurally plausible theories and models of how the visual cortex or artificial neural networks build selectivity for discriminating objects and invariance to real-world nuisance transformations.</p>","PeriodicalId":54271,"journal":{"name":"Journal of Mathematical Neuroscience","volume":" ","pages":"12"},"PeriodicalIF":2.3000,"publicationDate":"2020-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1186/s13408-020-00088-7","citationCount":"4","resultStr":"{\"title\":\"Neurally plausible mechanisms for learning selective and invariant representations.\",\"authors\":\"Fabio Anselmi, Ankit Patel, Lorenzo Rosasco\",\"doi\":\"10.1186/s13408-020-00088-7\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Coding for visual stimuli in the ventral stream is known to be invariant to object identity preserving nuisance transformations. Indeed, much recent theoretical and experimental work suggests that the main challenge for the visual cortex is to build up such nuisance invariant representations. Recently, artificial convolutional networks have succeeded in both learning such invariant properties and, surprisingly, predicting cortical responses in macaque and mouse visual cortex with unprecedented accuracy. However, some of the key ingredients that enable such success-supervised learning and the backpropagation algorithm-are neurally implausible. This makes it difficult to relate advances in understanding convolutional networks to the brain. In contrast, many of the existing neurally plausible theories of invariant representations in the brain involve unsupervised learning, and have been strongly tied to specific plasticity rules. To close this gap, we study an instantiation of simple-complex cell model and show, for a broad class of unsupervised learning rules (including Hebbian learning), that we can learn object representations that are invariant to nuisance transformations belonging to a finite orthogonal group. 
These findings may have implications for developing neurally plausible theories and models of how the visual cortex or artificial neural networks build selectivity for discriminating objects and invariance to real-world nuisance transformations.</p>\",\"PeriodicalId\":54271,\"journal\":{\"name\":\"Journal of Mathematical Neuroscience\",\"volume\":\" \",\"pages\":\"12\"},\"PeriodicalIF\":2.3000,\"publicationDate\":\"2020-08-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://sci-hub-pdf.com/10.1186/s13408-020-00088-7\",\"citationCount\":\"4\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Mathematical Neuroscience\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1186/s13408-020-00088-7\",\"RegionNum\":4,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"Neuroscience\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Mathematical Neuroscience","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1186/s13408-020-00088-7","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"Neuroscience","Score":null,"Total":0}
Neurally plausible mechanisms for learning selective and invariant representations.
Coding for visual stimuli in the ventral stream is known to be invariant to object-identity-preserving nuisance transformations. Indeed, much recent theoretical and experimental work suggests that the main challenge for the visual cortex is to build up such nuisance-invariant representations. Recently, artificial convolutional networks have succeeded both in learning such invariant properties and, surprisingly, in predicting cortical responses in macaque and mouse visual cortex with unprecedented accuracy. However, some of the key ingredients that enable this success, namely supervised learning and the backpropagation algorithm, are neurally implausible. This makes it difficult to relate advances in understanding convolutional networks to the brain. In contrast, many of the existing neurally plausible theories of invariant representations in the brain involve unsupervised learning and have been strongly tied to specific plasticity rules. To close this gap, we study an instantiation of a simple-complex cell model and show, for a broad class of unsupervised learning rules (including Hebbian learning), that we can learn object representations that are invariant to nuisance transformations belonging to a finite orthogonal group. These findings may have implications for developing neurally plausible theories and models of how the visual cortex or artificial neural networks build selectivity for discriminating objects and invariance to real-world nuisance transformations.
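A minimal sketch, not the authors' construction, of the kind of simple-complex cell computation the abstract refers to. Several choices here are illustrative assumptions: the finite orthogonal group is taken to be cyclic shifts (permutation matrices), the unsupervised Hebbian-type rule is Oja's rule, complex-cell pooling is a max over the orbit, and the dimensions, learning rate, and data are arbitrary.

```python
# Sketch of a simple-complex cell model with unsupervised (Hebbian-type) template
# learning and group-invariant pooling. All specific choices below (cyclic-shift
# group, Oja's rule, max pooling) are assumptions made for illustration only.
import numpy as np

rng = np.random.default_rng(0)
d = 16                                    # input dimension (assumed)
# Cyclic-shift permutation matrices: a finite orthogonal group acting on inputs.
group = [np.roll(np.eye(d), k, axis=0) for k in range(d)]

def oja_template(samples, lr=0.01, epochs=20):
    """Learn a template w with Oja's rule (Hebbian update plus normalization)."""
    w = rng.normal(size=d)
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        for x in samples:
            y = w @ x                     # simple-cell (linear) response
            w += lr * y * (x - y * w)     # Oja update keeps ||w|| near 1
    return w

def signature(x, w):
    """Complex-cell output: pool simple-cell responses over the group orbit of w."""
    responses = np.array([x @ (g @ w) for g in group])
    return responses.max()                # max pooling over the orbit

# Unsupervised training data: randomly transformed versions of a few base patterns.
base = rng.normal(size=(5, d))
samples = [group[rng.integers(d)] @ b for b in base for _ in range(10)]
w = oja_template(samples)

x = rng.normal(size=d)
gx = group[3] @ x                         # a nuisance transformation of x
print(np.isclose(signature(x, w), signature(gx, w)))  # True: signature is invariant
```

The invariance in this sketch is exact because the group is closed: replacing x by g'x only re-indexes the set of orbit responses {x . (g w)}, so any pooling function of that set (max, mean, histogram moments) is unchanged.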
Journal description:
The Journal of Mathematical Neuroscience (JMN) publishes research articles on the mathematical modeling and analysis of all areas of neuroscience, i.e., the study of the nervous system and its dysfunctions. The focus is on using mathematics as the primary tool for elucidating the fundamental mechanisms responsible for experimentally observed behaviours in neuroscience at all relevant scales, from the molecular world to that of cognition. The aim is to publish work that uses advanced mathematical techniques to illuminate these questions.
It publishes full-length original papers, rapid communications, and review articles. Papers that combine theoretical results with convincing numerical experiments are especially encouraged.
Papers that introduce and help develop new mathematical theory likely to be relevant to future studies of the nervous system in general, and the human brain in particular, are also welcome.