{"title":"Where-what网络1:“Where”和“what”通过自上而下的连接相互帮助","authors":"Zhengping Ji, J. Weng, D. Prokhorov","doi":"10.1109/DEVLRN.2008.4640806","DOIUrl":null,"url":null,"abstract":"This paper describes the design of a single learning network that integrates both object location (ldquowhererdquo) and object type (ldquowhatrdquo), from images of learned objects in natural complex backgrounds. The in-place learning algorithm is used to develop the internal representation (including synaptic bottom-up and top-down weights of every neuron) in the network, such that every neuron is responsible for the learning of its own signal processing characteristics within its connected network environment, through interactions with other neurons in the same layer. In contrast with the previous fully connected MILN [13], the cells in each layer are locally connected in the network. Local analysis is achieved through multi-scale receptive fields, with increasing sizes of perception from earlier to later layers. The results of the experiments showed how one type of information (ldquowhererdquo or ldquowhatrdquo) assists the network to suppress irrelevant information from background (from ldquowhererdquo) or irrelevant object information (from ldquowhatrdquo), so as to give the required missing information (ldquowhererdquo or ldquowhatrdquo) in the motor output.","PeriodicalId":366099,"journal":{"name":"2008 7th IEEE International Conference on Development and Learning","volume":"3 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2008-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"63","resultStr":"{\"title\":\"Where-what network 1: “Where” and “what” assist each other through top-down connections\",\"authors\":\"Zhengping Ji, J. Weng, D. Prokhorov\",\"doi\":\"10.1109/DEVLRN.2008.4640806\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper describes the design of a single learning network that integrates both object location (ldquowhererdquo) and object type (ldquowhatrdquo), from images of learned objects in natural complex backgrounds. The in-place learning algorithm is used to develop the internal representation (including synaptic bottom-up and top-down weights of every neuron) in the network, such that every neuron is responsible for the learning of its own signal processing characteristics within its connected network environment, through interactions with other neurons in the same layer. In contrast with the previous fully connected MILN [13], the cells in each layer are locally connected in the network. Local analysis is achieved through multi-scale receptive fields, with increasing sizes of perception from earlier to later layers. 
The results of the experiments showed how one type of information (ldquowhererdquo or ldquowhatrdquo) assists the network to suppress irrelevant information from background (from ldquowhererdquo) or irrelevant object information (from ldquowhatrdquo), so as to give the required missing information (ldquowhererdquo or ldquowhatrdquo) in the motor output.\",\"PeriodicalId\":366099,\"journal\":{\"name\":\"2008 7th IEEE International Conference on Development and Learning\",\"volume\":\"3 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2008-10-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"63\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2008 7th IEEE International Conference on Development and Learning\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/DEVLRN.2008.4640806\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2008 7th IEEE International Conference on Development and Learning","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/DEVLRN.2008.4640806","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
This paper describes the design of a single learning network that integrates both object location (“where”) and object type (“what”) from images of learned objects against natural, complex backgrounds. The in-place learning algorithm is used to develop the network's internal representation (including the bottom-up and top-down synaptic weights of every neuron), so that each neuron learns its own signal-processing characteristics within its connected network environment through interactions with other neurons in the same layer. In contrast with the earlier, fully connected MILN [13], the cells in each layer of this network are locally connected. Local analysis is achieved through multi-scale receptive fields, whose sizes increase from earlier to later layers. The experiments show how supplying one type of information (“where” or “what”) helps the network suppress irrelevant background information (given “where”) or irrelevant object information (given “what”), so that the missing information (“what” or “where”, respectively) is produced at the motor output.
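
The abstract describes each neuron as carrying both bottom-up and top-down synaptic weights and competing with other neurons in its layer. The following is a minimal sketch of that idea only, not the paper's in-place learning algorithm: the blended pre-response, the blending weight alpha, the top-k competition, and the toy dimensions are all illustrative assumptions.

import numpy as np

def unit(v):
    # Scale vectors to unit length so bottom-up and top-down matches are comparable.
    n = np.linalg.norm(v, axis=-1, keepdims=True)
    return v / np.maximum(n, 1e-12)

def layer_response(x, w_bottom_up, z, w_top_down, alpha=0.5, k=3):
    # Pre-response blends the bottom-up match (input x) with the top-down
    # match (imposed signal z); only the k strongest neurons stay active,
    # a crude stand-in for competition among neurons in the same layer.
    pre = ((1.0 - alpha) * unit(w_bottom_up) @ unit(x)
           + alpha * unit(w_top_down) @ unit(z))
    response = np.zeros_like(pre)
    winners = np.argsort(pre)[-k:]
    response[winners] = pre[winners]
    return response

# Toy usage: 8 neurons, a 16-dimensional bottom-up input (e.g., an image patch)
# and a 4-dimensional top-down signal (e.g., an imposed "what" label).
rng = np.random.default_rng(0)
x = rng.standard_normal(16)
z = np.eye(4)[2]
w_bu = rng.standard_normal((8, 16))
w_td = rng.standard_normal((8, 4))
print(layer_response(x, w_bu, z, w_td))

With the top-down term active, neurons whose top-down weights match the imposed signal dominate the competition, which is the spirit of the paper's claim that “where” and “what” assist each other through top-down connections.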