{"title":"神经网络在超立方体SIMD阵列上的高效实现","authors":"K. Kim, V.K.P. Kumar","doi":"10.1109/IJCNN.1989.118455","DOIUrl":null,"url":null,"abstract":"Summary form only given, as follows. An efficient parallel implementation of neural networks on Hypercube SIMD arrays is presented. The authors show a mapping of a neural network having n nodes and e connections onto a Hypercube array having (n+e) processing elements such that each update step of the neural network can be performed in 8 log/sub 2/ (n+e)-3 steps by preprocessing the weight matrix. The technique is simple and efficient and can be used on current parallel machines such as the Connection Machine.<<ETX>>","PeriodicalId":199877,"journal":{"name":"International 1989 Joint Conference on Neural Networks","volume":"33 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"13","resultStr":"{\"title\":\"Efficient implementation of neural networks on Hypercube SIMD arrays\",\"authors\":\"K. Kim, V.K.P. Kumar\",\"doi\":\"10.1109/IJCNN.1989.118455\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Summary form only given, as follows. An efficient parallel implementation of neural networks on Hypercube SIMD arrays is presented. The authors show a mapping of a neural network having n nodes and e connections onto a Hypercube array having (n+e) processing elements such that each update step of the neural network can be performed in 8 log/sub 2/ (n+e)-3 steps by preprocessing the weight matrix. The technique is simple and efficient and can be used on current parallel machines such as the Connection Machine.<<ETX>>\",\"PeriodicalId\":199877,\"journal\":{\"name\":\"International 1989 Joint Conference on Neural Networks\",\"volume\":\"33 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1900-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"13\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International 1989 Joint Conference on Neural Networks\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IJCNN.1989.118455\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International 1989 Joint Conference on Neural Networks","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IJCNN.1989.118455","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Efficient implementation of neural networks on Hypercube SIMD arrays
Summary form only given, as follows. An efficient parallel implementation of neural networks on Hypercube SIMD arrays is presented. The authors show a mapping of a neural network having n nodes and e connections onto a Hypercube array having (n + e) processing elements such that each update step of the neural network can be performed in 8 log₂(n + e) − 3 steps by preprocessing the weight matrix. The technique is simple and efficient and can be used on current parallel machines such as the Connection Machine.
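The summary gives only the resource and step-count figures, not the mapping or the weight-matrix preprocessing themselves. A minimal sketch of that cost model, assuming log base 2 and one processing element per node and per connection as stated in the abstract, might look like the following; the function name and example parameters are illustrative, not from the paper.

```python
import math

def hypercube_mapping_cost(n, e):
    """Cost model from the abstract: a network with n nodes and e connections
    is mapped onto a Hypercube SIMD array with (n + e) processing elements,
    and one update step takes 8*log2(n + e) - 3 parallel steps after the
    weight matrix has been preprocessed. (Sketch only; the mapping itself
    is not described in the summary.)"""
    pes = n + e                                # one PE per node and per connection (assumed reading)
    dim = math.ceil(math.log2(pes))            # hypercube dimension needed to host that many PEs
    steps_per_update = 8 * math.log2(pes) - 3  # claimed parallel step count per update
    return pes, dim, steps_per_update

# Example: a fully connected 100-node network (e = 100 * 99 directed connections)
if __name__ == "__main__":
    n, e = 100, 100 * 99
    pes, dim, steps = hypercube_mapping_cost(n, e)
    print(f"PEs required: {pes}, hypercube dimension: {dim}, "
          f"steps per update: {steps:.1f}")
```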