{"title":"为提高全量化神经网络加速器的精度设计高效快捷体系结构","authors":"Baoting Li, Longjun Liu, Yan-ming Jin, Peng Gao, Hongbin Sun, Nanning Zheng","doi":"10.1109/ASP-DAC47756.2020.9045739","DOIUrl":null,"url":null,"abstract":"Network quantization is an effective solution to compress Deep Neural Networks (DNN) that can be accelerated with custom circuit. However, existing quantization methods suffer from significant loss in accuracy. In this paper, we propose an efficient shortcut architecture to enhance the representational capability of DNN between different convolution layers. We further implement the shortcut hardware architecture to effectively improve the accuracy of fully quantized neural networks accelerator. The experimental results show that our shortcut architecture can obviously improve network accuracy while increasing very few hardware resources $( 0.11 \\times$ and $0.17 \\times$ for LUT and FF respectively) compared with the whole accelerator.","PeriodicalId":125112,"journal":{"name":"2020 25th Asia and South Pacific Design Automation Conference (ASP-DAC)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Designing Efficient Shortcut Architecture for Improving the Accuracy of Fully Quantized Neural Networks Accelerator\",\"authors\":\"Baoting Li, Longjun Liu, Yan-ming Jin, Peng Gao, Hongbin Sun, Nanning Zheng\",\"doi\":\"10.1109/ASP-DAC47756.2020.9045739\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Network quantization is an effective solution to compress Deep Neural Networks (DNN) that can be accelerated with custom circuit. However, existing quantization methods suffer from significant loss in accuracy. In this paper, we propose an efficient shortcut architecture to enhance the representational capability of DNN between different convolution layers. We further implement the shortcut hardware architecture to effectively improve the accuracy of fully quantized neural networks accelerator. 
The experimental results show that our shortcut architecture can obviously improve network accuracy while increasing very few hardware resources $( 0.11 \\\\times$ and $0.17 \\\\times$ for LUT and FF respectively) compared with the whole accelerator.\",\"PeriodicalId\":125112,\"journal\":{\"name\":\"2020 25th Asia and South Pacific Design Automation Conference (ASP-DAC)\",\"volume\":\"29 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2020 25th Asia and South Pacific Design Automation Conference (ASP-DAC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ASP-DAC47756.2020.9045739\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 25th Asia and South Pacific Design Automation Conference (ASP-DAC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ASP-DAC47756.2020.9045739","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Designing Efficient Shortcut Architecture for Improving the Accuracy of Fully Quantized Neural Networks Accelerator
Abstract: Network quantization is an effective solution for compressing Deep Neural Networks (DNNs) so that they can be accelerated with custom circuits. However, existing quantization methods suffer from significant accuracy loss. In this paper, we propose an efficient shortcut architecture that enhances the representational capability of the DNN between different convolution layers. We further implement the shortcut architecture in hardware to effectively improve the accuracy of a fully quantized neural network accelerator. Experimental results show that our shortcut architecture clearly improves network accuracy while adding very few hardware resources ($0.11\times$ and $0.17\times$ of the whole accelerator's LUTs and FFs, respectively).
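To make the idea concrete, below is a minimal sketch in plain NumPy of an identity shortcut wrapped around a quantized convolution stage. This is not the authors' hardware design: the uniform per-tensor fake quantization, the single-channel 3x3 convolution, and all names (fake_quantize, conv3x3, quantized_block_with_shortcut) are illustrative assumptions, not APIs from the paper.

```python
# Sketch only: a shortcut (identity) path around a quantized conv stage,
# assuming uniform per-tensor fake quantization and a single-channel map.
import numpy as np

def fake_quantize(x, num_bits=8):
    """Uniformly quantize x to num_bits, then dequantize (per-tensor)."""
    qmax = 2 ** num_bits - 1
    rng = float(x.max() - x.min())
    scale = rng / qmax if rng > 0 else 1.0
    zero_point = np.round(-x.min() / scale)
    q = np.clip(np.round(x / scale) + zero_point, 0, qmax)
    return (q - zero_point) * scale  # dequantized low-precision values

def conv3x3(x, w):
    """Naive 3x3 'same' convolution (cross-correlation) on a 2-D map."""
    h, wd = x.shape
    padded = np.pad(x, 1)
    out = np.zeros_like(x)
    for i in range(h):
        for j in range(wd):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * w)
    return out

def quantized_block_with_shortcut(x, w, num_bits=8):
    """y = conv(Q(x), Q(w)) + x: the shortcut bypasses the quantized path."""
    y = conv3x3(fake_quantize(x, num_bits), fake_quantize(w, num_bits))
    return y + x

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8)).astype(np.float32)
w = rng.standard_normal((3, 3)).astype(np.float32)
print(quantized_block_with_shortcut(x, w).shape)  # (8, 8)
```

Even in this toy version, the design point is visible: the addition y + x lets feature information bypass the low-precision convolution path between layers, which is how a shortcut recovers some of the representational capability lost to quantization.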