{"title":"节点标签匹配提高了深度信念网络的分类性能","authors":"Allan Campbell, V. Ciesielski, A. K. Qin","doi":"10.1109/IJCNN.2016.7727395","DOIUrl":null,"url":null,"abstract":"If output signals of artificial neural network classifiers are interpreted per node as class label predictors then partial knowledge encoded by the network during the learning procedure can be exploited in order to reassign which output node should represent each class label so that learning speed and final classification accuracy are improved. Our method for computing these reassignments is based on the maximum average correlation between actual node outputs and target labels over a small labeled validation dataset. Node Label Matching is an ancillary method for both supervised and unsupervised learning in artificial neural networks and we demonstrate its integration with Contrastive Divergence pre-training in Restricted Boltzmann Machines and Back Propagation fine-tuning in Deep Belief Networks. We introduce the Segmented Density Random Binary dataset and present empirical results of Node Label Matching on both our synthetic data and a subset of the MNIST benchmark.","PeriodicalId":109405,"journal":{"name":"2016 International Joint Conference on Neural Networks (IJCNN)","volume":"66 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Node label matching improves classification performance in Deep Belief Networks\",\"authors\":\"Allan Campbell, V. Ciesielski, A. K. Qin\",\"doi\":\"10.1109/IJCNN.2016.7727395\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"If output signals of artificial neural network classifiers are interpreted per node as class label predictors then partial knowledge encoded by the network during the learning procedure can be exploited in order to reassign which output node should represent each class label so that learning speed and final classification accuracy are improved. Our method for computing these reassignments is based on the maximum average correlation between actual node outputs and target labels over a small labeled validation dataset. Node Label Matching is an ancillary method for both supervised and unsupervised learning in artificial neural networks and we demonstrate its integration with Contrastive Divergence pre-training in Restricted Boltzmann Machines and Back Propagation fine-tuning in Deep Belief Networks. 
We introduce the Segmented Density Random Binary dataset and present empirical results of Node Label Matching on both our synthetic data and a subset of the MNIST benchmark.\",\"PeriodicalId\":109405,\"journal\":{\"name\":\"2016 International Joint Conference on Neural Networks (IJCNN)\",\"volume\":\"66 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2016-07-24\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2016 International Joint Conference on Neural Networks (IJCNN)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IJCNN.2016.7727395\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2016 International Joint Conference on Neural Networks (IJCNN)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IJCNN.2016.7727395","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Node label matching improves classification performance in Deep Belief Networks
If the output signals of an artificial neural network classifier are interpreted per node as class label predictors, then partial knowledge encoded by the network during learning can be exploited to reassign which output node represents each class label, improving learning speed and final classification accuracy. Our method for computing these reassignments is based on the maximum average correlation between actual node outputs and target labels over a small labeled validation dataset. Node Label Matching is an ancillary method for both supervised and unsupervised learning in artificial neural networks, and we demonstrate its integration with Contrastive Divergence pre-training in Restricted Boltzmann Machines and Back-Propagation fine-tuning in Deep Belief Networks. We introduce the Segmented Density Random Binary dataset and present empirical results of Node Label Matching on both our synthetic data and a subset of the MNIST benchmark.
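The abstract states the matching criterion (maximum average correlation between node outputs and target labels on a labeled validation set) but not an implementation. Below is a minimal sketch of that idea, assuming Pearson correlation between output-node activations and one-hot target indicators, with the Hungarian algorithm (scipy's `linear_sum_assignment`) standing in for whatever assignment procedure the authors actually use; the function name `match_node_labels` and its arguments are illustrative, not from the paper.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment


def match_node_labels(node_outputs, target_labels, num_classes):
    """Reassign output nodes to class labels by maximizing average correlation.

    node_outputs  : (n_samples, num_classes) array of actual output-node
                    activations on a small labeled validation set.
    target_labels : (n_samples,) array of integer class labels.
    Returns a permutation `perm` such that output node perm[c] is taken
    to predict class c.
    """
    # One-hot targets: column c is the indicator of class c.
    targets = np.eye(num_classes)[target_labels]  # (n_samples, num_classes)

    # corr[i, j] = Pearson correlation between the activations of output
    # node i and the indicator of class j over the validation set.
    corr = np.corrcoef(node_outputs.T, targets.T)[:num_classes, num_classes:]

    # Pick the node-to-label assignment with maximum total (equivalently,
    # maximum average) correlation. The paper does not name a solver; the
    # Hungarian algorithm used here is one way to realise that criterion.
    node_idx, label_idx = linear_sum_assignment(-corr)

    perm = np.empty(num_classes, dtype=int)
    perm[label_idx] = node_idx
    return perm
```

Given hypothetical validation-set outputs `val_outputs` and labels `val_labels`, `perm = match_node_labels(val_outputs, val_labels, 10)` yields a node ordering under which `np.argmax(val_outputs[:, perm], axis=1)` produces class predictions with the reassigned node-to-label mapping.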