Author: Hazem Migdady
Journal: International Journal of Data Mining & Knowledge Management Process
Published: 2014-05-30
DOI: 10.5121/ijdkp.2014.4301
Boundness of a Neural Network Weights Using the Notion of a Limit of a Sequence
A feed-forward neural network trained with the backpropagation learning algorithm is often regarded as a black-box classifier, since there is no definite interpretation or anticipation of the behavior of its weights. The weights are the learning tool of the classifier, and the learning task is performed by repeatedly modifying those weights. This modification is carried out using the delta rule, which underlies the gradient descent technique. In this article a proof is provided that helps to understand and explain the behavior of the weights in a feed-forward neural network with the backpropagation learning algorithm. It also illustrates why such a network is not always guaranteed to converge to a global minimum. Moreover, the proof shows that the weights in the network are upper bounded (i.e., they do not approach infinity).
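As a concrete illustration of the delta rule the abstract refers to, the following is a minimal sketch (not the paper's own code) of one gradient-descent update for a single sigmoid unit; the learning rate, inputs, and targets are arbitrary assumptions chosen for the example. Note how the update term vanishes as the output saturates, which is the informal intuition behind the boundedness claim.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def delta_rule_step(weights, inputs, target, lr=0.5):
    """One delta-rule update for a single sigmoid unit.

    weights and inputs are equal-length lists of floats;
    returns the updated weight list.
    """
    net = sum(w * x for w, x in zip(weights, inputs))
    out = sigmoid(net)
    # delta = (target - output) * f'(net), where f'(net) = out * (1 - out)
    delta = (target - out) * out * (1.0 - out)
    # Gradient-descent step: each weight moves by lr * delta * its input.
    return [w + lr * delta * x for w, x in zip(weights, inputs)]

# Repeated updates on one (hypothetical) training example: the output
# approaches the target while the weight increments shrink toward zero.
weights = [0.1, -0.2]
for _ in range(1000):
    weights = delta_rule_step(weights, [1.0, 0.5], 1.0)
```

Because the factor `out * (1 - out)` tends to zero as the unit saturates, successive weight increments decay, so the weights grow ever more slowly rather than diverging, which is consistent with the upper-boundedness the paper proves formally.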