A deep learning convolution architecture for simple embedded applications
Authors: Chan Kim, Yong Cheol Peter Cho, Youngsu Kwon
Venue: 2017 IEEE 7th International Conference on Consumer Electronics - Berlin (ICCE-Berlin)
DOI: 10.1109/ICCE-Berlin.2017.8210595
Publication date: 2017-09-01
Citations: 0
Abstract
A deep learning convolution architecture for simple embedded applications
A simple AXI-based convolution architecture for deep learning is presented. Input feature maps and kernel weights are stored in P K×K memory blocks. Convolution proceeds from output feature map 0 to M−1, and within each feature map the output is generated in raster-scan order. Data from the P input feature maps are summed in parallel during convolution. By manipulating the read addresses and the read-data alignment, the architecture supplies the P K×K input feature map values, the P K×K weights, and the bias for the feature maps currently being processed. Dual buffers allow convolution of the current output feature map while the DMA write of the previous final output feature map is still in progress. Correct operation was verified by comparing RTL simulation results against those of a C reference program. The method achieves an over 2,000× speed-up compared to a pure software implementation, and with flow control between the DMA and the convolution engine, far less memory is required. This architecture can be used for convolution acceleration in moderate deep learning applications on embedded systems.