{"title":"A feed forward neural network with resolution properties for function approximation and modeling","authors":"P. H. F. D. Silva, E. Fernandes, A. Neto","doi":"10.1109/SBRN.2002.1181435","DOIUrl":null,"url":null,"abstract":"This paper attempts to the development of a novel feed forward artificial neural network paradigm. In its formulation, the hidden neurons were defined by the use of sample activation functions. The following function parameters were included: amplitude, width and translation. Further, the hidden neurons were classified as low and high resolution neurons, with global and local approximation properties, respectively. The gradient method was applied to obtain simple recursive relations for paradigm training. The results of the applications shown the interesting paradigm properties: (i) easy choice of neural network size; (ii) fast training; (iii) strong ability to perform complicated function approximation and nonlinear modeling.","PeriodicalId":157186,"journal":{"name":"VII Brazilian Symposium on Neural Networks, 2002. SBRN 2002. Proceedings.","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2002-11-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"VII Brazilian Symposium on Neural Networks, 2002. SBRN 2002. Proceedings.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SBRN.2002.1181435","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 4
Abstract
This paper presents the development of a novel feed forward artificial neural network paradigm. In its formulation, the hidden neurons are defined through the use of sample activation functions with the following parameters: amplitude, width and translation. Further, the hidden neurons are classified as low- and high-resolution neurons, with global and local approximation properties, respectively. The gradient method is applied to obtain simple recursive relations for training the paradigm. The results of the applications show the paradigm's interesting properties: (i) easy choice of neural network size; (ii) fast training; (iii) strong ability to perform complicated function approximation and nonlinear modeling.
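The abstract does not specify the exact form of the sample activation functions, so the following is a minimal sketch only, assuming a Gaussian-shaped activation parameterized by amplitude, width and translation, trained by plain gradient descent on squared error. All function names and the example target are hypothetical illustrations, not the paper's method.

```python
import numpy as np

def sample_activation(x, a, w, t):
    """Assumed Gaussian-shaped activation: amplitude a, width w, translation t.
    A wide w gives a global (low-resolution) neuron; a narrow w gives a
    local (high-resolution) neuron."""
    return a * np.exp(-((x - t) / w) ** 2)

def forward(x, params):
    """Network output: sum of all hidden-neuron responses (1-input, 1-output)."""
    return sum(sample_activation(x, a, w, t) for a, w, t in params)

def train_step(xs, ys, params, lr=0.01):
    """One gradient-descent step on squared error, using the analytic
    derivatives of the Gaussian activation w.r.t. a, w and t."""
    new_params = []
    for a, w, t in params:
        ga = gw = gt = 0.0
        for x, y in zip(xs, ys):
            err = forward(x, params) - y
            phi = np.exp(-((x - t) / w) ** 2)
            ga += err * phi                                  # dE/da
            gw += err * a * phi * 2 * (x - t) ** 2 / w ** 3  # dE/dw
            gt += err * a * phi * 2 * (x - t) / w ** 2       # dE/dt
        new_params.append((a - lr * ga, w - lr * gw, t - lr * gt))
    return new_params

# Example: approximate sin(x) with two wide (low-resolution) and two
# narrow (high-resolution) neurons.
xs = np.linspace(0, 2 * np.pi, 50)
ys = np.sin(xs)
params = [(1.0, 3.0, 2.0), (1.0, 3.0, 4.0), (0.5, 0.5, 1.0), (0.5, 0.5, 5.0)]
for _ in range(500):
    params = train_step(xs, ys, params, lr=0.02)
```

The split of the parameter list into wide and narrow neurons mirrors the abstract's low/high-resolution classification: the wide neurons capture the global trend, and the narrow ones refine local detail.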