S. Babii
2007 4th International Symposium on Applied Computational Intelligence and Informatics, 25 June 2007
DOI: 10.1109/SACI.2007.375524
Performance Evaluation for Training a Distributed Back-Propagation Implementation
This paper presents the results of experiments in parallelizing the training phase of a feed-forward artificial neural network. More specifically, we develop and analyze a parallelization strategy for the widely used neural-network learning algorithm known as back-propagation. We describe an approach for parallelizing the back-propagation algorithm, and we implemented it on several LANs, which allowed us to evaluate and analyze its performance based on the results of actual runs. We were interested in the qualitative aspects of the analysis, in order to gain a sound understanding of the factors that determine the behavior of these parallel algorithms. We were also interested in discovering and dealing with some of the specific circumstances that must be considered when a parallelized neural-network learning algorithm is implemented on a set of workstations in a LAN. Part of our purpose is to investigate whether it is possible to exploit the computational resources of such a set of workstations.
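The abstract does not spell out the parallelization strategy, but a common approach for distributing back-propagation training over LAN workstations is training-set (data) parallelism: each workstation computes the gradient on its own slice of the training batch, and a master node averages the slices' gradients before applying the weight update. The sketch below is an illustration of that general idea, not the paper's actual implementation; the network shape, learning rate, and function names are assumptions.

```python
import numpy as np

def forward(W, X):
    """Single-layer sigmoid network: activations for inputs X."""
    return 1.0 / (1.0 + np.exp(-X @ W))

def local_gradient(W, X, y):
    """Mean-squared-error gradient on one worker's data slice."""
    a = forward(W, X)
    delta = (a - y) * a * (1.0 - a)   # sigmoid derivative folded in
    return X.T @ delta / len(X)

def parallel_step(W, X, y, n_workers, lr=0.5):
    """Simulate n_workers each computing a gradient on its slice of the
    batch, then average the gradients (as a master node on the LAN
    would) and perform one weight update."""
    grads = [local_gradient(W, Xs, ys)
             for Xs, ys in zip(np.array_split(X, n_workers),
                               np.array_split(y, n_workers))]
    return W - lr * np.mean(grads, axis=0)

# Toy run: learn a simple separable target with 4 simulated workers.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))
y = (X[:, :1] > 0).astype(float)
W = rng.normal(size=(3, 1))
for _ in range(200):
    W = parallel_step(W, X, y, n_workers=4)
```

With equal-sized slices, the average of the per-slice gradients equals the full-batch gradient, so the parallel step is numerically equivalent to the serial one; the paper's performance question is then about the communication cost of collecting gradients over the LAN versus the computation saved per workstation.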