{"title":"它是并行的两种范例:神经网络和通用MIMD计算机之间的桥梁","authors":"Y. Boniface, F. Alexandre, S. Vialle","doi":"10.1109/IJCNN.1999.833453","DOIUrl":null,"url":null,"abstract":"Hardware developments have led to the use of shared memory as an efficient parallel programming method. The main goals of the work reported here are to speed up executions and to decrease development time of parallel neural network implementations. To allow for such implementations, a library has been defined, as a bridge between neural networks and general purpose MIMD computer parallelisms.","PeriodicalId":157719,"journal":{"name":"IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1999-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"17","resultStr":"{\"title\":\"A bridge between two paradigms for parallelism: neural networks and general purpose MIMD computers\",\"authors\":\"Y. Boniface, F. Alexandre, S. Vialle\",\"doi\":\"10.1109/IJCNN.1999.833453\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Hardware developments have led to the use of shared memory as an efficient parallel programming method. The main goals of the work reported here are to speed up executions and to decrease development time of parallel neural network implementations. To allow for such implementations, a library has been defined, as a bridge between neural networks and general purpose MIMD computer parallelisms.\",\"PeriodicalId\":157719,\"journal\":{\"name\":\"IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339)\",\"volume\":\"29 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1999-07-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"17\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IJCNN.1999.833453\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IJCNN.1999.833453","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A bridge between two paradigms for parallelism: neural networks and general purpose MIMD computers
Hardware developments have made shared memory an efficient approach to parallel programming. The main goals of the work reported here are to speed up execution and to reduce the development time of parallel neural network implementations. To support such implementations, a library has been defined as a bridge between neural networks and general purpose MIMD computer parallelism.
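The paper does not reproduce its library's API in this abstract, so the following is only a minimal illustrative sketch of the underlying idea: neuron-level parallelism on a shared-memory MIMD machine, written here with standard C and OpenMP. The layer sizes, array names, and use of OpenMP are assumptions for illustration, not the authors' interface.

/*
 * Hypothetical sketch (not the authors' library): neuron-parallel forward
 * pass of one fully connected layer on a shared-memory machine. Each
 * OpenMP thread updates a disjoint subset of output neurons; the input
 * activations and the weight matrix live in shared memory, so no explicit
 * message passing is needed.
 *
 * Build (GCC/Clang):  cc -O2 -fopenmp layer.c -o layer -lm
 */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#define N_IN  256   /* input neurons  (illustrative sizes) */
#define N_OUT 128   /* output neurons */

static double w[N_OUT][N_IN];  /* shared weight matrix */
static double x[N_IN];         /* shared input layer   */
static double y[N_OUT];        /* shared output layer  */

int main(void)
{
    /* Fill inputs and weights with arbitrary values for the demo. */
    for (int i = 0; i < N_IN; i++)
        x[i] = (double)rand() / RAND_MAX;
    for (int j = 0; j < N_OUT; j++)
        for (int i = 0; i < N_IN; i++)
            w[j][i] = (double)rand() / RAND_MAX - 0.5;

    /* Neuron-level parallelism: output neurons are independent, so the
       loop over j is shared out across the available processors. */
    #pragma omp parallel for schedule(static)
    for (int j = 0; j < N_OUT; j++) {
        double sum = 0.0;
        for (int i = 0; i < N_IN; i++)
            sum += w[j][i] * x[i];
        y[j] = tanh(sum);          /* sigmoid-like activation */
    }

    printf("y[0] = %f, y[%d] = %f\n", y[0], N_OUT - 1, y[N_OUT - 1]);
    return 0;
}

A bridging library in the spirit of the abstract would hide the pragma and the data partitioning behind neural-network-level abstractions (layers, neurons, connections), which is what lets the same description both shorten development time and exploit the machine's parallelism.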