Implementation of artificial neural networks on a reconfigurable hardware accelerator
Mario Porrmann, U. Witkowski, Heiko Kalte, U. Rückert
Proceedings 10th Euromicro Workshop on Parallel, Distributed and Network-based Processing, 2002. DOI: 10.1109/EMPDP.2002.994279
Hardware implementations of three different artificial neural network architectures are presented: neural associative memories, self-organizing feature maps, and basis function networks. The implementations build on the reconfigurable, FPGA-based hardware accelerator RAPTOR2000. Key implementation issues are considered; in particular, the resource efficiency and performance of the presented realizations are discussed.
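To make the kind of computation involved more concrete, the following is a minimal illustrative sketch of one of the three investigated architectures, a self-organizing feature map training step, written in fixed-point-friendly C of the style that maps well onto FPGA logic. The map size (MAP_W, MAP_H), input dimension (DIM), 8-bit data width, Manhattan distance metric, and power-of-two learning rate are assumptions made for this example; they are not taken from the RAPTOR2000 implementation described in the paper.

/*
 * Illustrative sketch only (not the paper's implementation): one training
 * step of a self-organizing feature map using integer arithmetic.
 */
#include <stdint.h>
#include <stdlib.h>

#define MAP_W 8    /* assumed map width   */
#define MAP_H 8    /* assumed map height  */
#define DIM   16   /* assumed input dimension */

/* 8-bit weights, as is common in resource-efficient hardware designs */
static uint8_t weights[MAP_H][MAP_W][DIM];

/* Manhattan distance needs no multipliers and is cheap in FPGA fabric */
static uint32_t manhattan(const uint8_t *w, const uint8_t *x)
{
    uint32_t d = 0;
    for (int i = 0; i < DIM; i++)
        d += (uint32_t)abs((int)w[i] - (int)x[i]);
    return d;
}

/* One SOM training step: find the best-matching unit, then pull the
 * neighbourhood towards the input. The learning rate is a power of two
 * (right shift by lr_shift), so the update also needs no multiplier. */
void som_train_step(const uint8_t x[DIM], int radius, int lr_shift)
{
    int best_r = 0, best_c = 0;
    uint32_t best_d = UINT32_MAX;

    /* winner search over the whole map */
    for (int r = 0; r < MAP_H; r++)
        for (int c = 0; c < MAP_W; c++) {
            uint32_t d = manhattan(weights[r][c], x);
            if (d < best_d) { best_d = d; best_r = r; best_c = c; }
        }

    /* rectangular neighbourhood update around the winner */
    for (int r = 0; r < MAP_H; r++)
        for (int c = 0; c < MAP_W; c++) {
            if (abs(r - best_r) > radius || abs(c - best_c) > radius)
                continue;
            for (int i = 0; i < DIM; i++) {
                int diff  = (int)x[i] - (int)weights[r][c][i];
                /* shift magnitude, then restore the sign, to stay
                 * well-defined for negative differences */
                int delta = (diff >= 0) ? (diff >> lr_shift)
                                        : -((-diff) >> lr_shift);
                weights[r][c][i] = (uint8_t)((int)weights[r][c][i] + delta);
            }
        }
}

In an actual FPGA realization, the winner search and the neighbourhood update would typically be parallelized across processing elements rather than executed as nested loops; the sketch only shows the arithmetic that each element performs.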