{"title":"FPAP:卷积神经网络高效计算的折叠结构","authors":"Yizhi Wang, Jun Lin, Zhongfeng Wang","doi":"10.1109/ISVLSI.2018.00098","DOIUrl":null,"url":null,"abstract":"Convolutional neural networks (CNNs) have found extensive applications in practice. However, weight/activation's sparsity and different data precision requirements across layers lead to a large amount of redundant computations. In this paper, we propose an efficient architecture for CNNs, named Folded Precision-Adjustable Processor (FPAP), to skip those unnecessary computations with ease. Computations are folded in the following two aspects to achieve efficient computing. On one hand, the dominant multiply-and-add (MAC) operations are performed bit-serially based on a bit-pair encoding algorithm so that the FPAP can adapt to different numerical precisions without using multipliers with long data width. On the other hand, a 1-D convolution is undertaken by a multi-tap transposed finite impulse response (FIR) filter, which is folded into one tap so that computations involving zero activations and weights can be easily skipped. Equipped with the precision-adjustable MAC unit and the folded FIR filter structure, a well-designed array architecture, consisting of many identical processing elements is developed, which is scalable for different throughput requirements and highly flexible for different numerical precisions. Besides, a novel genetic algorithm based kernel reallocation scheme is introduced to mitigate the load imbalance issue. Our synthesis results demonstrate that the proposed FPAP can significantly reduce the logic complexity and the critical path over the corresponding unfolded design, which only delivers slightly higher throughput when processing sparse and compact models. Our experiments also show that FPAP can scale its energy efficiency from 1.01TOP/s/W to 6.26TOP/s/W under 90nm CMOS technology when different data precisions are used.","PeriodicalId":114330,"journal":{"name":"2018 IEEE Computer Society Annual Symposium on VLSI (ISVLSI)","volume":"57 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"FPAP: A Folded Architecture for Efficient Computing of Convolutional Neural Networks\",\"authors\":\"Yizhi Wang, Jun Lin, Zhongfeng Wang\",\"doi\":\"10.1109/ISVLSI.2018.00098\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Convolutional neural networks (CNNs) have found extensive applications in practice. However, weight/activation's sparsity and different data precision requirements across layers lead to a large amount of redundant computations. In this paper, we propose an efficient architecture for CNNs, named Folded Precision-Adjustable Processor (FPAP), to skip those unnecessary computations with ease. Computations are folded in the following two aspects to achieve efficient computing. On one hand, the dominant multiply-and-add (MAC) operations are performed bit-serially based on a bit-pair encoding algorithm so that the FPAP can adapt to different numerical precisions without using multipliers with long data width. On the other hand, a 1-D convolution is undertaken by a multi-tap transposed finite impulse response (FIR) filter, which is folded into one tap so that computations involving zero activations and weights can be easily skipped. 
Equipped with the precision-adjustable MAC unit and the folded FIR filter structure, a well-designed array architecture, consisting of many identical processing elements is developed, which is scalable for different throughput requirements and highly flexible for different numerical precisions. Besides, a novel genetic algorithm based kernel reallocation scheme is introduced to mitigate the load imbalance issue. Our synthesis results demonstrate that the proposed FPAP can significantly reduce the logic complexity and the critical path over the corresponding unfolded design, which only delivers slightly higher throughput when processing sparse and compact models. Our experiments also show that FPAP can scale its energy efficiency from 1.01TOP/s/W to 6.26TOP/s/W under 90nm CMOS technology when different data precisions are used.\",\"PeriodicalId\":114330,\"journal\":{\"name\":\"2018 IEEE Computer Society Annual Symposium on VLSI (ISVLSI)\",\"volume\":\"57 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2018-07-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2018 IEEE Computer Society Annual Symposium on VLSI (ISVLSI)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ISVLSI.2018.00098\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 IEEE Computer Society Annual Symposium on VLSI (ISVLSI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISVLSI.2018.00098","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
FPAP: A Folded Architecture for Efficient Computing of Convolutional Neural Networks
Convolutional neural networks (CNNs) have found extensive applications in practice. However, weight and activation sparsity, together with the different data-precision requirements across layers, leads to a large number of redundant computations. In this paper, we propose an efficient architecture for CNNs, named the Folded Precision-Adjustable Processor (FPAP), to skip these unnecessary computations with ease. Computations are folded in two respects to achieve efficient computing. On one hand, the dominant multiply-and-add (MAC) operations are performed bit-serially based on a bit-pair encoding algorithm, so that the FPAP can adapt to different numerical precisions without resorting to multipliers of long data width. On the other hand, a 1-D convolution is carried out by a multi-tap transposed finite impulse response (FIR) filter that is folded into a single tap, so that computations involving zero activations or zero weights can be easily skipped. Built on the precision-adjustable MAC unit and the folded FIR filter structure, a well-designed array architecture consisting of many identical processing elements (PEs) is developed; it is scalable to different throughput requirements and highly flexible across numerical precisions. In addition, a novel genetic-algorithm-based kernel reallocation scheme is introduced to mitigate the load-imbalance issue. Our synthesis results demonstrate that the proposed FPAP significantly reduces the logic complexity and the critical path compared with the corresponding unfolded design, which delivers only slightly higher throughput when processing sparse and compact models. Our experiments also show that the FPAP scales its energy efficiency from 1.01 TOP/s/W to 6.26 TOP/s/W in 90 nm CMOS technology as different data precisions are used.
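The abstract does not spell out the bit-pair encoding, but consuming weight bits two at a time via radix-4 Booth recoding is a standard way to realize a bit-serial, precision-adjustable MAC: narrower weights simply yield fewer serial digits, and zero digits cost nothing. The following Python sketch models that interpretation; the names `booth4_digits` and `serial_mac` are ours, not the paper's.

```python
def booth4_digits(w, bits=8):
    """Recode a two's-complement integer into radix-4 Booth digits in {-2..2}.

    Satisfies w == sum(d * 4**i for i, d in enumerate(digits)) for |w| < 2**(bits-1).
    """
    if bits % 2:
        bits += 1                       # pad to an even number of bits
    u = w & ((1 << bits) - 1)           # two's-complement bit pattern
    prev, digits = 0, []
    for i in range(0, bits, 2):         # one bit-pair per serial cycle
        b0 = (u >> i) & 1
        b1 = (u >> (i + 1)) & 1
        digits.append(-2 * b1 + b0 + prev)
        prev = b1
    return digits

def serial_mac(acc, x, w, bits=8):
    """Bit-serial MAC: one weight bit-pair per 'cycle', via shift-and-add."""
    for i, d in enumerate(booth4_digits(w, bits)):
        if d:                           # zero digits consume no compute
            acc += (x * d) << (2 * i)   # partial product, scaled by 4**i
    return acc

assert serial_mac(0, 7, -3) == -21      # quick self-check
```

Lowering `bits` shortens the serial schedule, which is the behavioral analogue of the architecture adapting to a lower-precision layer without instantiating a wide multiplier.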
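To see why folding a multi-tap transposed FIR filter into one tap makes zero-skipping easy, consider a behavioral model in which the inner tap loop stands in for the serial cycles of the single folded tap: whenever the broadcast activation or the visited weight is zero, that cycle produces no partial product and can be dropped from the schedule. This is a sketch of the general technique under that reading, not the paper's RTL.

```python
def folded_transposed_fir(xs, w):
    """1-D convolution y[n] = sum_k w[k] * x[n-k], transposed FIR form.

    Each input sample is broadcast to all taps and partial sums ripple
    through a register chain; the k-loop models the folded serial visits.
    """
    K = len(w)
    regs = [0] * K                      # regs[k] holds the delayed partial sum s_{k+1}
    ys = []
    for x in xs:
        ys.append(w[0] * x + regs[0])   # y[n] = w[0]*x[n] + s_1[n-1]
        for k in range(K - 1):          # one folded-tap cycle per k
            if x == 0 or w[k + 1] == 0:
                p = 0                   # skippable cycle: no MAC needed
            else:
                p = w[k + 1] * x
            regs[k] = p + regs[k + 1]   # s_{k+1}[n] = w[k+1]*x[n] + s_{k+2}[n-1]
    return ys

assert folded_transposed_fir([1, 1, 1], [1, 2, 3]) == [1, 3, 6]
```

In hardware the skip would manifest as a shorter serial schedule for that sample rather than an if-branch, which is why folding all taps onto one MAC makes the saving straightforward to harvest.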
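The abstract gives no details of the genetic algorithm, so the following toy GA only illustrates one plausible formulation of kernel reallocation: each chromosome maps kernels to PEs, each kernel's cost is (hypothetically) its nonzero-MAC count, and the fitness to minimize is the busiest PE's load, i.e. the makespan that load imbalance inflates.

```python
import random

def ga_reallocate(loads, n_pes, pop=50, gens=200, pmut=0.1, seed=0):
    """Toy GA: chromosome[i] = PE assigned to kernel i (illustrative only)."""
    rng = random.Random(seed)
    n = len(loads)                           # assumes n >= 2 kernels

    def makespan(chrom):                     # load of the busiest PE
        per_pe = [0] * n_pes
        for kernel, pe in enumerate(chrom):
            per_pe[pe] += loads[kernel]
        return max(per_pe)

    popn = [[rng.randrange(n_pes) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=makespan)
        elite = popn[: pop // 2]             # truncation selection
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n)        # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(n):               # per-gene mutation
                if rng.random() < pmut:
                    child[i] = rng.randrange(n_pes)
            children.append(child)
        popn = elite + children
    return min(popn, key=makespan)
```

For example, `ga_reallocate([5, 9, 3, 7, 4, 8], n_pes=2)` searches for an assignment whose busiest PE carries close to the ideal 18 load units (the total of 36 split evenly across two PEs); the real scheme would derive the per-kernel loads from measured sparsity.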