{"title":"Indicators of hidden neuron functionality: the weight matrix versus neuron behaviour","authors":"Tom Gedeon","doi":"10.1109/ANNES.1995.499431","DOIUrl":null,"url":null,"abstract":"Pruning of redundant or less important hidden neurons from the popular backpropagation trained neural networks is useful for a host of reasons, ranging from improvements of generalisation performance, to use as a precursor for rule extraction. For pruning it is necessary to identify hidden neurons with similar functionality. We have previously used a pruning process based on the behaviour of the hidden neurons in an image processing application to produce a quality driven compression by eliminating the least different hidden neurons. We consider the computationally cheaper alternative using only the trained weight matrix of the neural networks at each stage of the compression process. We conclude that the weight matrix is not sufficient for differentiating the functionality of the hidden neurons for this task, being essentially the functional equivalence problem which is computationally intractable.","PeriodicalId":123427,"journal":{"name":"Proceedings 1995 Second New Zealand International Two-Stream Conference on Artificial Neural Networks and Expert Systems","volume":"13 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1995-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"9","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings 1995 Second New Zealand International Two-Stream Conference on Artificial Neural Networks and Expert Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ANNES.1995.499431","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 9
Abstract
Pruning redundant or less important hidden neurons from popular backpropagation-trained neural networks is useful for a host of reasons, ranging from improved generalisation performance to use as a precursor for rule extraction. For pruning, it is necessary to identify hidden neurons with similar functionality. We have previously used a pruning process based on the behaviour of the hidden neurons in an image processing application to produce quality-driven compression by eliminating the least distinct hidden neurons. Here we consider a computationally cheaper alternative that uses only the trained weight matrix of the network at each stage of the compression process. We conclude that the weight matrix is not sufficient for differentiating the functionality of the hidden neurons for this task: deciding functional similarity from the weights alone is essentially the functional equivalence problem, which is computationally intractable.
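The abstract contrasts behaviour-based pruning with inspection of the weight matrix but does not spell out the similarity measure. As a minimal sketch, assuming a hidden unit's behaviour is summarised by its vector of activations over the training patterns and similarity by the angle between such vectors, with similar pairs merged by folding output weights into the survivor (the sigmoid activation, the 15-degree threshold, and all function names below are illustrative assumptions, not the paper's code):

```python
import numpy as np

def hidden_activations(X, W_in, b_in):
    """Hidden-layer activations for every input pattern (sigmoid units assumed)."""
    return 1.0 / (1.0 + np.exp(-(X @ W_in + b_in)))

def pairwise_angles(H):
    """Angle in degrees between each pair of hidden units' activation
    vectors over the pattern set. Activations are centred to [-0.5, 0.5]
    so that anti-correlated units show up as near-180-degree pairs."""
    V = H - 0.5                              # centre sigmoid outputs
    norms = np.linalg.norm(V, axis=0)
    C = (V.T @ V) / np.outer(norms, norms)   # cosine similarity matrix
    return np.degrees(np.arccos(np.clip(C, -1.0, 1.0)))

def prune_most_similar(W_out, angles, threshold=15.0):
    """If the closest pair of hidden units is within `threshold` degrees,
    remove one of them and add its outgoing weights to the survivor,
    approximately preserving the network's function."""
    angles = angles.copy()
    np.fill_diagonal(angles, 180.0)          # ignore self-similarity
    i, j = np.unravel_index(np.argmin(angles), angles.shape)
    if angles[i, j] >= threshold:
        return W_out, None                   # nothing similar enough to prune
    W_out = W_out.copy()
    W_out[i, :] += W_out[j, :]               # survivor inherits pruned unit's role
    return np.delete(W_out, j, axis=0), j
```

The weight-matrix-only alternative the paper examines would instead compare the units' incoming weight vectors (rows of `W_in.T`) directly, without running any patterns through the network; the paper's conclusion is that this cheaper comparison does not reliably identify functionally similar units.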