Authors: El Hadrami Cheikh Tourad, M. Eleuldj
DOI: 10.1145/3454127.3456597
Published in: Proceedings of the 4th International Conference on Networking, Information Systems & Security, April 2021
DNN2FPGA: Generic Design Flow to Implement DNNs on FPGAs
Deep Neural Networks (DNNs) have recently demonstrated considerable advantages in many deep learning tasks such as image classification and speech recognition. However, achieving high-performance DNNs has been accompanied by growing computing and memory requirements. Field-Programmable Gate Array (FPGA) devices are well suited to deploying such DNNs because they offer the required flexibility, power efficiency, and computing performance. Implementing a DNN on an FPGA is usually done by describing the model in a high-level language such as Python, manually translating it to a Hardware Description Language (HDL), and finally synthesizing it with a vendor tool. This manual translation is time-consuming and requires HDL expertise, which limits the adoption of FPGAs. This paper presents a new generic design flow that automatically implements DNN models from a high-level language on FPGA devices: it takes the model in graph representation as input and automatically generates the FPGA hardware implementation. The paper reviews related work, presents the proposed design flow and its hardware implementation, and reports results for two case studies: a Multi-Layer Perceptron (MLP) that solves the classical XOR problem, and a DNN for MNIST dataset classification. Finally, we present the conclusion and future work.
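To make the first case study concrete, the sketch below shows a minimal fixed-weight MLP that solves the classical XOR problem. This is not code from the paper: the weights and thresholds are hand-chosen, hypothetical values, whereas the paper's flow would start from a trained model in graph representation and generate HDL from it automatically. It only illustrates why XOR needs a hidden layer (it is not linearly separable) and why such a network maps cheaply onto FPGA logic.

```python
def step(x):
    """Heaviside step activation; trivially cheap to realize as a comparator in FPGA logic."""
    return 1 if x >= 0 else 0

def xor_mlp(x1, x2):
    """Two-input MLP with a two-neuron hidden layer computing XOR."""
    # Hidden layer: one neuron computes OR, the other AND (hand-chosen weights).
    h_or = step(x1 + x2 - 0.5)
    h_and = step(x1 + x2 - 1.5)
    # Output neuron: OR AND (NOT AND) == XOR.
    return step(h_or - h_and - 0.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_mlp(a, b))
```

A single-layer perceptron cannot realize this truth table, which is why the MLP is the standard minimal example for a design flow that must handle hidden layers.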