{"title":"特征检测神经元发育的机制","authors":"F. Peper, H. Noda","doi":"10.1109/ANNES.1995.499439","DOIUrl":null,"url":null,"abstract":"The mammalian retina and visual cortex contain feature detecting neurons with a remarkable similarity to neurons in artificial neural networks for principal component analysis. Hebbian-type learning is one of the mechanisms responsible for the development of such neurons. It does not model, however, control of the number of neurons that develop in response to input. We propose a mechanism that adaptively controls this number. The mechanism utilizes the variances of neurons' outputs and encodes them as the lengths of the neural network's synaptic weight vectors, thus allowing only the synapses of those neurons to develop that represent significant information about the neural network's input and suppressing neurons' synapses that don't.","PeriodicalId":123427,"journal":{"name":"Proceedings 1995 Second New Zealand International Two-Stream Conference on Artificial Neural Networks and Expert Systems","volume":"119 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1995-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"A mechanism for the development of feature detecting neurons\",\"authors\":\"F. Peper, H. Noda\",\"doi\":\"10.1109/ANNES.1995.499439\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The mammalian retina and visual cortex contain feature detecting neurons with a remarkable similarity to neurons in artificial neural networks for principal component analysis. Hebbian-type learning is one of the mechanisms responsible for the development of such neurons. It does not model, however, control of the number of neurons that develop in response to input. We propose a mechanism that adaptively controls this number. The mechanism utilizes the variances of neurons' outputs and encodes them as the lengths of the neural network's synaptic weight vectors, thus allowing only the synapses of those neurons to develop that represent significant information about the neural network's input and suppressing neurons' synapses that don't.\",\"PeriodicalId\":123427,\"journal\":{\"name\":\"Proceedings 1995 Second New Zealand International Two-Stream Conference on Artificial Neural Networks and Expert Systems\",\"volume\":\"119 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1995-11-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings 1995 Second New Zealand International Two-Stream Conference on Artificial Neural Networks and Expert Systems\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ANNES.1995.499439\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings 1995 Second New Zealand International Two-Stream Conference on Artificial Neural Networks and Expert Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ANNES.1995.499439","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A mechanism for the development of feature detecting neurons
The mammalian retina and visual cortex contain feature detecting neurons with a remarkable similarity to neurons in artificial neural networks for principal component analysis. Hebbian-type learning is one of the mechanisms responsible for the development of such neurons. However, it does not model how the number of neurons that develop in response to input is controlled. We propose a mechanism that adaptively controls this number. The mechanism tracks the variances of the neurons' outputs and encodes them as the lengths of the network's synaptic weight vectors, so that only the synapses of neurons representing significant information about the network's input are allowed to develop, while the synapses of neurons that do not are suppressed.
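The abstract does not give the paper's equations, so the following is only a minimal NumPy sketch of the general idea: a Sanger-style generalized Hebbian (PCA) rule is assumed for the weight directions, and an illustrative variance-based length control shrinks the weight vectors of neurons whose output variance stays below a threshold. The learning rate, the variance floor, and the decay factors are hypothetical parameters chosen for the toy example, not values from the paper.

```python
import numpy as np

# Sketch: Hebbian-type PCA neurons whose output variance controls how far
# their synaptic weight vectors are allowed to develop. Low-variance neurons
# (carrying little information about the input) are gradually suppressed.
# NOTE: Sanger's rule + the variance/length coupling below are an illustrative
# interpretation of the abstract, not the authors' exact formulation.

rng = np.random.default_rng(0)

n_inputs, n_neurons = 8, 4
eta = 0.005          # Hebbian learning rate (assumed)
tau = 0.99           # decay factor for the running variance estimate (assumed)
var_floor = 0.05     # variance below which a neuron's synapses are suppressed (assumed)

W = rng.normal(scale=0.1, size=(n_neurons, n_inputs))  # synaptic weight vectors (rows)
var_est = np.zeros(n_neurons)                          # running output-variance estimates

# Toy zero-mean input: only the first two input directions carry significant variance.
cov_scales = np.array([3.0, 2.0, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1])

for step in range(20000):
    x = rng.normal(size=n_inputs) * cov_scales   # one input sample
    y = W @ x                                    # neuron outputs

    # Sanger's rule (generalized Hebbian algorithm) shapes the weight directions
    # toward the leading principal components of the input.
    lower = np.tril(np.outer(y, y))
    W += eta * (np.outer(y, x) - lower @ W)

    # Track each neuron's output variance; neurons below the variance floor have
    # their weight-vector lengths shrunk, i.e. their synapses are suppressed.
    var_est = tau * var_est + (1 - tau) * y ** 2
    W[var_est < var_floor] *= 0.999

print("estimated output variances:", np.round(var_est, 3))
print("weight-vector lengths:     ", np.round(np.linalg.norm(W, axis=1), 3))
```

Running the sketch, the two neurons aligned with the high-variance input directions retain weight vectors of roughly unit length, while the remaining neurons' weight vectors decay toward zero, which is the adaptive control of the effective number of feature detecting neurons that the abstract describes.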