{"title":"Efficient Hardware Acceleration of Convolutional Neural Networks","authors":"S. Kala, B. R. Jose, J. Mathew, N. Sivanandan","doi":"10.1109/SOCC46988.2019.1570573948","DOIUrl":null,"url":null,"abstract":"Convolutional neural networks (CNNs) have emerged as the most efficient technique for solving a host of machine learning tasks. Compute and memory intensive nature of CNN has stimulated lot of work in hardware acceleration of these network models. FPGAs have emerged as a promising approach for accelerating CNNs, due to its high performance, flexibility and energy efficiency. We propose a unified architecture named UniWiG, where both Winograd based convolution and general matrix multiplication (GEMM) can be accelerated using the same set of processing elements. Proposed architecture has been used to accelerate AlexNet and VGG-16 models on FPGA with a performance of 433.63 GOPS and 407.23 GOPS respectively. We have also analyzed the performance with varying Winograd tile sizes and found out the most appropriate tile sizes for maximizing the performance while reducing on-chip memory resource.","PeriodicalId":253998,"journal":{"name":"2019 32nd IEEE International System-on-Chip Conference (SOCC)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 32nd IEEE International System-on-Chip Conference (SOCC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SOCC46988.2019.1570573948","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 3
Abstract
Convolutional neural networks (CNNs) have emerged as the most effective technique for solving a host of machine learning tasks. The compute- and memory-intensive nature of CNNs has stimulated a great deal of work on hardware acceleration of these network models. FPGAs have emerged as a promising platform for accelerating CNNs due to their high performance, flexibility and energy efficiency. We propose a unified architecture named UniWiG, in which both Winograd-based convolution and general matrix multiplication (GEMM) can be accelerated using the same set of processing elements. The proposed architecture has been used to accelerate the AlexNet and VGG-16 models on FPGA, achieving 433.63 GOPS and 407.23 GOPS respectively. We have also analyzed performance with varying Winograd tile sizes and identified the most appropriate tile sizes for maximizing performance while reducing on-chip memory resources.
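For readers unfamiliar with the Winograd transform that UniWiG-style accelerators map onto a GEMM-like processing-element array, the following is a minimal sketch of the standard F(2x2, 3x3) Winograd algorithm (Lavin and Gray) for a single 4x4 input tile. The matrix names B, G and A follow the usual Winograd formulation; this is an illustrative NumPy reference, not the paper's hardware implementation.

```python
import numpy as np

# Standard Winograd F(2x2, 3x3) transform matrices (Lavin & Gray).
B_T = np.array([[1,  0, -1,  0],
                [0,  1,  1,  0],
                [0, -1,  1,  0],
                [0,  1,  0, -1]], dtype=np.float32)

G = np.array([[1.0,  0.0, 0.0],
              [0.5,  0.5, 0.5],
              [0.5, -0.5, 0.5],
              [0.0,  0.0, 1.0]], dtype=np.float32)

A_T = np.array([[1, 1,  1,  0],
                [0, 1, -1, -1]], dtype=np.float32)

def winograd_f2x2_3x3(tile, kernel):
    """Compute a 2x2 output block from a 4x4 input tile and a 3x3 kernel."""
    U = G @ kernel @ G.T          # transform the filter  -> 4x4
    V = B_T @ tile @ B_T.T        # transform the input tile -> 4x4
    M = U * V                     # element-wise product: 16 multiplies
    return A_T @ M @ A_T.T        # inverse transform -> 2x2 output

# Quick check against direct (cross-correlation style) convolution on the tile.
rng = np.random.default_rng(0)
tile = rng.standard_normal((4, 4)).astype(np.float32)
kernel = rng.standard_normal((3, 3)).astype(np.float32)
direct = np.array([[np.sum(tile[i:i+3, j:j+3] * kernel) for j in range(2)]
                   for i in range(2)], dtype=np.float32)
assert np.allclose(winograd_f2x2_3x3(tile, kernel), direct, atol=1e-4)
```

Across many input channels, the element-wise products at each of the 16 transformed tile positions accumulate over channels and can be batched into matrix multiplications, which is one way Winograd-based convolution and GEMM can share the same processing-element array, consistent with the unified datapath described in the abstract.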