{"title":"深度卷积神经网络的流水线节能硬件加速","authors":"Hmidi Alaeddine, Malek Jihene","doi":"10.1109/DTSS.2019.8915295","DOIUrl":null,"url":null,"abstract":"In this paper, a new architecture of an accelerator of a convolutional neural network is proposed. The suggested solution is pipelined and it reduces the band passing memory through the exploitation of sliding window images. Moreover, it is reconfigurable online at a convolution stride level. This proposal operates at a frequency equivalent to 280 mhz and offers a performance of 3.36 GMAC. The energy consumption is 475mw.","PeriodicalId":342516,"journal":{"name":"2019 IEEE International Conference on Design & Test of Integrated Micro & Nano-Systems (DTS)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A Pipelined Energy-efficient Hardware Accelaration for Deep Convolutional Neural Networks\",\"authors\":\"Hmidi Alaeddine, Malek Jihene\",\"doi\":\"10.1109/DTSS.2019.8915295\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this paper, a new architecture of an accelerator of a convolutional neural network is proposed. The suggested solution is pipelined and it reduces the band passing memory through the exploitation of sliding window images. Moreover, it is reconfigurable online at a convolution stride level. This proposal operates at a frequency equivalent to 280 mhz and offers a performance of 3.36 GMAC. The energy consumption is 475mw.\",\"PeriodicalId\":342516,\"journal\":{\"name\":\"2019 IEEE International Conference on Design & Test of Integrated Micro & Nano-Systems (DTS)\",\"volume\":\"12 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-04-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2019 IEEE International Conference on Design & Test of Integrated Micro & Nano-Systems (DTS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/DTSS.2019.8915295\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 IEEE International Conference on Design & Test of Integrated Micro & Nano-Systems (DTS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/DTSS.2019.8915295","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A Pipelined Energy-Efficient Hardware Acceleration for Deep Convolutional Neural Networks
In this paper, a new architecture for a convolutional neural network accelerator is proposed. The suggested solution is pipelined and reduces memory bandwidth by exploiting sliding windows over the input image. Moreover, it is reconfigurable online at the convolution-stride level. The proposed design operates at 280 MHz, delivers a performance of 3.36 GMAC, and consumes 475 mW.
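The abstract does not detail the hardware implementation, but the bandwidth-saving idea it alludes to (sliding windows fed from on-chip row buffers, with the stride as a runtime parameter) can be illustrated with a minimal software sketch. The function name and structure below are assumptions for illustration only, not the authors' RTL or actual design.

```python
# Minimal sketch (assumption, not the authors' design): a line-buffer-based
# sliding-window convolution. Each input pixel is fetched from "external
# memory" exactly once; the K-row line buffer supplies every window that
# reuses it, which is the bandwidth reduction the abstract alludes to.
# The stride is a runtime parameter, mirroring the claimed online
# stride-level reconfigurability.
import numpy as np

def sliding_window_conv(image: np.ndarray, kernel: np.ndarray, stride: int = 1) -> np.ndarray:
    K = kernel.shape[0]                      # square K x K kernel
    H, W = image.shape
    out_h = (H - K) // stride + 1
    out_w = (W - K) // stride + 1
    out = np.zeros((out_h, out_w), dtype=image.dtype)

    line_buffer = np.zeros((K, W), dtype=image.dtype)    # on-chip row cache
    for row in range(H):
        # Shift rows up and stream in the new row: the only "external" read.
        line_buffer[:-1] = line_buffer[1:]
        line_buffer[-1] = image[row]
        # Compute only when K rows are buffered and the row matches the stride.
        if row < K - 1 or (row - (K - 1)) % stride != 0:
            continue
        out_row = (row - (K - 1)) // stride
        for col in range(0, W - K + 1, stride):
            window = line_buffer[:, col:col + K]          # reuse buffered rows
            out[out_row, col // stride] = np.sum(window * kernel)
    return out

if __name__ == "__main__":
    img = np.arange(36, dtype=np.float32).reshape(6, 6)
    ker = np.ones((3, 3), dtype=np.float32)
    print(sliding_window_conv(img, ker, stride=2))
```

In a hardware pipeline this inner loop would be unrolled into a MAC array fed by shift registers, but the access pattern is the same: rows already held on chip are reused for every overlapping window instead of being re-read from external memory.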