Parallel batch pattern training of neural networks on computational clusters
V. Turchenko, L. Grandinetti, A. Sachenko
2012 International Conference on High Performance Computing & Simulation (HPCS), 2012-07-02
DOI: 10.1109/HPCSim.2012.6266912
This paper presents research on the parallelization efficiency of a batch pattern training algorithm for a multilayer perceptron on computational clusters. The multilayer perceptron model and the standard sequential batch pattern training algorithm are described theoretically, and an algorithmic description of the parallel version of the batch pattern training method is presented. The parallelization efficiency of the developed algorithm is investigated as the dimension of the parallelized problem is progressively increased. The experimental results show that (i) a cluster with InfiniBand interconnect achieves better parallelization efficiency than a general-purpose parallel computer with a ccNUMA architecture, owing to its lower communication overhead, and (ii) the parallelization efficiency of the algorithm is high enough for its appropriate use on the general-purpose clusters and parallel computers available within modern computational grids.
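The core idea behind batch pattern parallelization is that the full-batch gradient is a sum over training patterns, so the pattern set can be split among workers, each worker accumulates its local gradient contribution, and one global reduction per epoch reproduces the sequential update exactly. The sketch below (not the authors' code; the network sizes and function names are illustrative assumptions) demonstrates this equivalence for a one-hidden-layer perceptron:

```python
import numpy as np

# Hypothetical sketch of data-parallel batch pattern training.
# Each "worker" computes gradient contributions over its slice of the
# training patterns; summing the per-worker gradients reproduces the
# full-batch gradient, so one weight update per epoch after a global
# reduction is mathematically equivalent to sequential batch training.

rng = np.random.default_rng(0)

def mlp_grads(W1, W2, X, T):
    """Batch gradients of squared error for a 1-hidden-layer perceptron (tanh hidden)."""
    H = np.tanh(X @ W1)           # hidden activations
    Y = H @ W2                    # linear output
    E = Y - T                     # output error
    dW2 = H.T @ E                 # gradient w.r.t. output weights
    dH = (E @ W2.T) * (1 - H**2)  # backpropagated hidden error
    dW1 = X.T @ dH                # gradient w.r.t. input weights
    return dW1, dW2

X = rng.standard_normal((64, 5))   # 64 training patterns, 5 inputs
T = rng.standard_normal((64, 2))   # 2 target outputs
W1 = rng.standard_normal((5, 8))
W2 = rng.standard_normal((8, 2))

# Sequential full-batch gradients.
g1_seq, g2_seq = mlp_grads(W1, W2, X, T)

# "Parallel" version: 4 workers, 16 patterns each, reduced by summation
# (the role MPI_Allreduce would play on a real cluster).
chunks = np.array_split(np.arange(64), 4)
g1_par = sum(mlp_grads(W1, W2, X[idx], T[idx])[0] for idx in chunks)
g2_par = sum(mlp_grads(W1, W2, X[idx], T[idx])[1] for idx in chunks)

print(np.allclose(g1_seq, g1_par), np.allclose(g2_seq, g2_par))  # True True
```

Because the per-epoch communication is a single reduction of the gradient arrays, the communication-to-computation ratio shrinks as the network or pattern set grows, which is consistent with the abstract's observation that the lower-latency InfiniBand interconnect yields higher efficiency.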