{"title":"A hierarchical neural network involving nonlinear spectral processing","authors":"O. Ersoy, D. Hong","doi":"10.1109/IJCNN.1989.118514","DOIUrl":null,"url":null,"abstract":"Summary form only given, as follows. A new neural network architecture called the hierarchical neural network (HNN) is introduced. The HNN involves a number of stages in which each stage can be a particular neural network (SNN). Between two SNNs there is a nonlinear transformation of those input vectors rejected by the first SNN. The HNN has many desirable properties such as optimized system complexity in the sense of minimized number of stages, high classification accuracy, minimized learning and recall times, and truly parallel architectures in which all SNNs are operating simultaneously without waiting for data from each other. The experiments performed in comparison to multilayered networks with backpropagation training indicated the superiority of the HNN.<<ETX>>","PeriodicalId":199877,"journal":{"name":"International 1989 Joint Conference on Neural Networks","volume":"176 2 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1989-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International 1989 Joint Conference on Neural Networks","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IJCNN.1989.118514","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 8
Abstract
Summary form only given, as follows. A new neural network architecture called the hierarchical neural network (HNN) is introduced. The HNN consists of a number of stages, each of which can be a particular neural network (SNN). Between two SNNs there is a nonlinear transformation of the input vectors rejected by the first SNN. The HNN has many desirable properties, such as optimized system complexity in the sense of a minimized number of stages, high classification accuracy, minimized learning and recall times, and a truly parallel architecture in which all SNNs operate simultaneously without waiting for data from each other. Experiments comparing the HNN with multilayer networks trained by backpropagation indicated the superiority of the HNN.
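The sketch below illustrates the staged reject-and-transform idea described in the abstract: each stage classifies what it can, and inputs it rejects are passed through a nonlinear transformation before reaching the next stage. It is a minimal illustration, not the authors' design; the choice of a delta-rule linear classifier per stage, the confidence-threshold reject rule, and the particular nonlinear transform are all assumptions introduced here, since the summary abstract does not specify them.

import numpy as np

rng = np.random.default_rng(0)


class StageNN:
    """One stage (SNN): a simple linear classifier with a reject option.
    The delta-rule training and threshold-based rejection are illustrative
    assumptions, not the paper's specific stage design."""

    def __init__(self, n_in, n_classes, threshold=0.5, lr=0.1, epochs=50):
        self.W = rng.normal(scale=0.1, size=(n_classes, n_in))
        self.threshold = threshold
        self.lr = lr
        self.epochs = epochs

    def fit(self, X, y):
        n_classes = self.W.shape[0]
        T = np.eye(n_classes)[y]                      # one-hot targets
        for _ in range(self.epochs):
            out = X @ self.W.T
            self.W += self.lr * (T - out).T @ X / len(X)   # delta rule

    def predict(self, X):
        out = X @ self.W.T
        labels = out.argmax(axis=1)
        accepted = out.max(axis=1) >= self.threshold  # below threshold => reject
        return labels, accepted


def nonlinear_transform(X):
    # Illustrative inter-stage nonlinearity applied only to rejected vectors.
    return np.tanh(X) + 0.1 * X ** 2


class HierarchicalNN:
    """Chain of stages; each later stage sees only the transformed rejects."""

    def __init__(self, n_stages, n_in, n_classes):
        self.stages = [StageNN(n_in, n_classes) for _ in range(n_stages)]

    def fit(self, X, y):
        for stage in self.stages:
            if len(X) == 0:
                break
            stage.fit(X, y)
            _, accepted = stage.predict(X)
            if accepted.all():
                break
            # Only rejected vectors, nonlinearly transformed, go to the next stage.
            X, y = nonlinear_transform(X[~accepted]), y[~accepted]

    def predict(self, X):
        preds = np.zeros(len(X), dtype=int)
        pending = np.arange(len(X))
        for stage in self.stages:
            labels, accepted = stage.predict(X)
            preds[pending[accepted]] = labels[accepted]
            if accepted.all():
                return preds
            pending, X = pending[~accepted], nonlinear_transform(X[~accepted])
        preds[pending] = labels[~accepted]            # last stage decides the rest
        return preds


# Toy usage: two Gaussian classes in 2-D.
X = np.vstack([rng.normal(-1, 1, (100, 2)), rng.normal(1, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
hnn = HierarchicalNN(n_stages=3, n_in=2, n_classes=2)
hnn.fit(X, y)
print("training accuracy:", (hnn.predict(X) == y).mean())

Note that in this sketch the stages must still be trained sequentially, because each stage's training set depends on what earlier stages reject; the parallelism claimed in the abstract refers to recall, where all stages can process data simultaneously once trained.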