From model to FPGA: Software-hardware co-design for efficient neural network acceleration
Kaiyuan Guo, Lingzhi Sui, Jiantao Qiu, Song Yao, Song Han, Yu Wang, Huazhong Yang
2016 IEEE Hot Chips 28 Symposium (HCS), August 2016. DOI: 10.1109/HOTCHIPS.2016.7936208
Artificial neural networks, which dominate artificial intelligence applications such as object recognition and speech recognition, are still evolving. To bring neural networks to a wider range of applications, customized hardware is necessary, since CPUs and GPUs are not efficient enough. FPGAs can be an ideal platform for neural network acceleration: they are programmable and can achieve much higher energy efficiency than general-purpose processors. However, the long development cycle and insufficient performance of traditional FPGA acceleration solutions have prevented their wide adoption. In this work, we propose a complete design flow that achieves both fast deployment and high energy efficiency when accelerating neural networks on FPGAs. Deep compression and data quantization are employed to exploit the redundancy in the algorithms and to reduce both computational and memory complexity. Two architecture designs, one for CNNs and one for DNNs/RNNs, are introduced together with the compilation environment. Evaluated on Xilinx Zynq 7000 and Kintex UltraScale series FPGAs with real-world neural networks, the proposed flow achieves up to 10 times higher energy efficiency than mobile and desktop GPUs.
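The data quantization step the abstract mentions lends itself to a short illustration. Below is a minimal NumPy sketch of layer-wise fixed-point quantization: for each layer, search for the fractional bit width that minimizes quantization error at a fixed total bit width, then round the weights to that format. The function names, the brute-force search, and the mean-squared-error metric are our own illustrative choices, not the paper's actual procedure.

```python
import numpy as np

def quantize_fixed_point(x, total_bits=8, frac_bits=4):
    """Quantize a float array to signed fixed-point with the given total
    bit width and fractional bit count, then dequantize back to float so
    the rounding error can be inspected."""
    scale = 2.0 ** frac_bits
    qmin = -(2 ** (total_bits - 1))
    qmax = 2 ** (total_bits - 1) - 1
    q = np.clip(np.round(x * scale), qmin, qmax)
    return q / scale

def choose_frac_bits(weights, total_bits=8):
    """Per-layer search for the fractional bit width minimizing MSE,
    in the spirit of dynamic-precision quantization (illustrative only)."""
    best_frac, best_err = 0, np.inf
    for frac in range(total_bits):
        err = np.mean((weights - quantize_fixed_point(weights, total_bits, frac)) ** 2)
        if err < best_err:
            best_frac, best_err = frac, err
    return best_frac

# Example: quantize a hypothetical layer's weights to 8-bit fixed point.
rng = np.random.default_rng(0)
w = rng.normal(scale=0.5, size=(128, 64)).astype(np.float32)
frac = choose_frac_bits(w, total_bits=8)
w_q = quantize_fixed_point(w, total_bits=8, frac_bits=frac)
print(f"chosen fractional bits: {frac}, mse: {np.mean((w - w_q) ** 2):.2e}")
```

Choosing the fractional bit width per layer, rather than globally, lets layers with very different dynamic ranges share the same small bit budget, which is what makes aggressive bit widths viable on FPGA datapaths.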