{"title":"神经网络中隐藏节点的最优数量","authors":"N. Wanas, G. Auda, M. Kamel, F. Karray","doi":"10.1109/CCECE.1998.685648","DOIUrl":null,"url":null,"abstract":"In this study we show, empirically, that the best performance of a neural network occurs when the number of hidden nodes is equal to log(T), where T is the number of training samples. This value represents the optimal performance of the neural network as well as the optimal associated computational cost. We also show that the measure of entropy in the hidden layer not only gives a good foresight to the performance of the neural network, but can be used as a criteria to optimize the neural network as well. This can be achieved by minimizing the network entropy (i.e. maximizing the entropy in the hidden layer) as a means of modifying the weights of the neural network.","PeriodicalId":177613,"journal":{"name":"Conference Proceedings. IEEE Canadian Conference on Electrical and Computer Engineering (Cat. No.98TH8341)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1998-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"140","resultStr":"{\"title\":\"On the optimal number of hidden nodes in a neural network\",\"authors\":\"N. Wanas, G. Auda, M. Kamel, F. Karray\",\"doi\":\"10.1109/CCECE.1998.685648\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this study we show, empirically, that the best performance of a neural network occurs when the number of hidden nodes is equal to log(T), where T is the number of training samples. This value represents the optimal performance of the neural network as well as the optimal associated computational cost. We also show that the measure of entropy in the hidden layer not only gives a good foresight to the performance of the neural network, but can be used as a criteria to optimize the neural network as well. This can be achieved by minimizing the network entropy (i.e. 
maximizing the entropy in the hidden layer) as a means of modifying the weights of the neural network.\",\"PeriodicalId\":177613,\"journal\":{\"name\":\"Conference Proceedings. IEEE Canadian Conference on Electrical and Computer Engineering (Cat. No.98TH8341)\",\"volume\":\"10 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1998-05-24\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"140\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Conference Proceedings. IEEE Canadian Conference on Electrical and Computer Engineering (Cat. No.98TH8341)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CCECE.1998.685648\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Conference Proceedings. IEEE Canadian Conference on Electrical and Computer Engineering (Cat. No.98TH8341)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CCECE.1998.685648","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
On the optimal number of hidden nodes in a neural network
In this study we show, empirically, that the best performance of a neural network occurs when the number of hidden nodes equals log(T), where T is the number of training samples. This value yields the optimal performance of the neural network at the optimal associated computational cost. We also show that the entropy measured in the hidden layer not only provides good foresight into the performance of the neural network, but can also be used as a criterion for optimizing it. This can be achieved by minimizing the network entropy (i.e. maximizing the entropy in the hidden layer) as a means of modifying the weights of the neural network.
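The two quantities the abstract relies on — the log(T) rule of thumb for sizing the hidden layer, and an entropy measure over hidden-unit activations — can be sketched as below. This is a minimal illustration, not the paper's implementation: the logarithm base for log(T) is not stated in the abstract (natural log is assumed here), and the binned Shannon-entropy estimator and the assumption that activations lie in [0, 1) are hypothetical choices for the sake of the example.

```python
import math


def suggested_hidden_nodes(num_training_samples: int) -> int:
    # Rule of thumb from the abstract: hidden nodes ~= log(T).
    # Base of the logarithm is an assumption (natural log used here).
    return max(1, round(math.log(num_training_samples)))


def hidden_layer_entropy(activations, bins=10):
    # Shannon entropy (bits) of binned hidden-unit activations.
    # Hypothetical estimator of "entropy in the hidden layer";
    # assumes each activation lies in [0, 1).
    counts = [0] * bins
    for a in activations:
        idx = min(int(a * bins), bins - 1)
        counts[idx] += 1
    total = len(activations)
    return -sum(
        (c / total) * math.log2(c / total) for c in counts if c > 0
    )
```

For example, with T = 1000 training samples the rule suggests round(ln 1000) = 7 hidden nodes, and activations spread evenly across the bins give the maximum entropy log2(bins) — consistent with the abstract's goal of maximizing entropy in the hidden layer.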