Hidden node activation differential - a new neural network relevancy criteria
Patrick Chan Khue Hiang
DOI: 10.1109/KES.1997.616920
Proceedings of 1st International Conference on Conventional and Knowledge Based Intelligent Electronic Systems, KES '97 (1997-05-27)
Citations: 0
Abstract
Neural networks have been used in many problems such as character recognition, time series forecasting and image coding. The generalisation ability of a network depends on its internal structure. Network parameters should be set correctly so that the network does not overfit and can still generalise to data outside the training set. One mechanism for achieving an optimal neural network structure is to identify the essential components (hidden nodes) and to prune away the irrelevant ones. Most of the criteria proposed for pruning are expensive to compute and impractical for large networks and large training sets. In this paper, a new relevancy criterion is proposed and three existing criteria are investigated. The properties of the proposed criterion are covered in detail, and its similarities to the existing criteria are illustrated.
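The abstract does not give the formula for the proposed criterion, but the title suggests relevance is measured from how much a hidden node's activation varies (its "differential") over the training set. The following is a minimal sketch of that idea under that assumption: a node whose activation barely changes across inputs contributes little to the network's output and is a natural pruning candidate. All names (`hidden_activations`, `activation_differential`) and the choice of range (max minus min) as the spread measure are illustrative, not taken from the paper.

```python
import numpy as np

def hidden_activations(X, W1, b1):
    # Forward pass through the hidden layer of a one-hidden-layer MLP.
    return np.tanh(X @ W1 + b1)

def activation_differential(H):
    # Relevance per hidden node: the spread (max - min) of its activation
    # over the training set. A nearly constant node carries little
    # information and is a candidate for pruning.
    return H.max(axis=0) - H.min(axis=0)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))   # 200 training samples, 4 inputs
W1 = rng.normal(size=(4, 6))    # 6 hidden nodes
b1 = rng.normal(size=6)
W1[:, 2] = 0.0                  # make node 2 deliberately inert:
b1[2] = 0.0                     # its activation is tanh(0) for every input

H = hidden_activations(X, W1, b1)
rel = activation_differential(H)
least_relevant = int(np.argmin(rel))   # node to prune first
```

Note that this relevance score needs only one forward pass over the training data, which matches the paper's motivation: criteria that require second-order information are expensive for large networks and large training sets.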